Preventive and curative dental services utilization among children aged 12 years and younger in Tehran, Iran, based on the Andersen behavioral model: A generalized structural equation modeling (PMC11737785)

The sustainable development agenda, adopted by all United Nations member states in 2015, outlines a collective plan to achieve peace and prosperity for everyone, both now and in the future. At its core are the 17 Sustainable Development Goals (SDGs) and 169 targets, which serve as urgent calls to action for both developed and developing countries. They recognize that ending poverty and other deprivations must go hand in hand with strategies that improve health and education, reduce inequality, and spur economic growth while preserving the environment. Ensuring the use of essential health services in the context of universal health coverage (UHC) is a prominent target of the SDGs, particularly SDG 3, which the member states are committed to achieving by 2030. SDG 3 emphasizes good health and well-being for all . Oral health is an inseparable part of general health, affecting a large segment of society and influencing quality of life and financial resources . Oral diseases are largely preventable, and appropriate oral health behaviors, including regular dental visits, are crucial. Nevertheless, not all people use these services regularly. Neglect of dental services is more common among children than among older age groups. A study in Brazil showed that 11.7% of individuals under 19 had never had a dental visit . An integrative review by Curi et al. identifies age as an important indicator of dental services utilization. These findings may be related to the cumulative effect of caries and a lack of parental knowledge about the importance of early dental visits . 
Regular dental visits in children can reduce dental caries, lower dental treatment costs, allow assessment of growth processes (such as anomalies and tooth eruption), and improve oral health-related quality of life . Previous literature suggests that no single factor stands out as the most significant barrier to oral health care utilization; instead, several socioeconomic, familial, and psychological factors affect dental visits by children. A study in Al-Madinah, Saudi Arabia, showed that children from high-income families use dental services more than those from lower-income families . Another study in Brazil reported that the utilization of dental services was associated with children's age, mothers' education, family income, and dental caries . A study in Lebanon highlighted the importance of economic and familial factors in dental services utilization among children . Xu et al. indicated that need factors were crucial in dental services utilization, whereas income showed no significant association . Research on dental service utilization is vital for delivering equitable, high-quality health services to all individuals, communities, and populations. Theoretical models that explain health service utilization are crucial for guiding research in this area, and grounding empirical designs in such models substantially improves study design and outcomes . Andersen's behavioral model of health services utilization is a well-known framework for assessing the multifactorial nature of health service use. A recent scoping review suggests that Andersen's theoretical model pivotally contributed to developing lasting core constructs, such as sociodemographics, health behaviors, and health system factors, in existing theoretical models to clarify the utilization of health services . 
This model, established in 1968 and modified several times over the years, suggests that health services utilization is related to three groups of factors: predisposing (demographic, social structure, and health beliefs), enabling (personal/family and social), and need (perceived and evaluated) . The primary objective of this study was to assess the utilization of preventive and curative dental services among children aged 12 years and younger in Tehran, Iran. We used the Andersen behavioral model and generalized structural equation modeling to achieve this goal. By doing so, we aimed to provide insights into the factors influencing dental services utilization among children, thereby supporting sound decision-making and helping policymakers address future challenges.

This study was part of a cross-sectional, population-based telephone survey conducted in Tehran, the capital of Iran, in 2023. A total of 886 children aged 12 years and younger (423 children aged six years and younger and 463 children aged 7 to 12) were included. Proportionate stratified random sampling was used to obtain a representative sample of children aged 12 and younger across the 22 districts (strata) of Tehran. Based on the results of two pilot studies (face-to-face and telephone-based), we opted for telephone-based sampling, as it offers quick, low-cost, representative sampling in an environment that is comfortable for participants . It was necessary to ensure that the sample size was adequate for structural equation modeling; Ramlall recommends a minimum of twenty observations per observed variable . With 14 observed variables in our theoretical model, this requirement translated to a minimum sample size of 280, which we comfortably exceeded with our sample of 886 children. 
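The sample-size requirement above is simple arithmetic; a minimal sketch of the check, using the figures reported in the text:

```python
# Minimum-sample-size check following Ramlall's rule of thumb cited in the
# text: at least 20 observations per observed variable in an SEM.
OBS_PER_VARIABLE = 20   # Ramlall's recommended ratio
n_observed_vars = 14    # observed variables in the study's theoretical model
n_sample = 886          # children actually surveyed

minimum_n = OBS_PER_VARIABLE * n_observed_vars
print(minimum_n)              # 280
print(n_sample >= minimum_n)  # True
```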
Eligibility criteria

All children aged 12 years and younger who lived in Tehran and whose parents had access to a landline or mobile telephone and agreed to participate were included. Children whose parents were unable to answer questions due to hearing impairment or mental disorders were excluded.

Sample selection and data collection

Data collection was conducted between 1 November 2022 and 3 April 2023. For recruitment, a list of phone numbers (75 per district), including landlines and cell phones, was generated using a random number generator, taking the area code of the target district into account. Interviewers selected phone numbers at random from this list. Twelve trained interviewers called participants and posed the questions to the parents. Within each family, all children aged 12 years and younger who met the eligibility criteria were included. The process continued until the sample size for each district was reached. Two call attempts were scheduled: the first in the morning and, if unanswered, a second during non-working hours in the evening. Each interview took 15 to 20 minutes. Before the interviews began, a two-hour orientation meeting was held to train the interviewers and explain the project objectives, and two monitored interviews were conducted. Project managers provided feedback to interviewers, answered their questions, and calibrated them. During data collection, an informed supervisor was present to check the accuracy and quality of the process. Additionally, the interviewers were given a telephone number through which the project manager could resolve any issues arising during data collection.

Questionnaire

The tool used in this study was a comprehensive questionnaire designed by reviewing nationally and internationally approved questionnaires . 
An expert panel of 13 professors specializing in community oral health, pediatric dentistry, public health, and epidemiology assessed the validity of the questionnaire. The panel selected the most relevant items to ensure clarity, interpretability, and accuracy across the questionnaire domains, and evaluated the content validity of the items in terms of relevance, coverage, and representativeness. The quantitative assessment involved calculating the Content Validity Ratio (CVR) and Content Validity Index (CVI). Modifications were made to contentious items until consensus was reached. The questionnaire was then piloted with 20 individuals from the target population outside the study sample. To assess reliability, the questionnaire was administered to the same 20 individuals again using the test-retest method, achieving percent agreement above 90%. The questionnaire followed the components of Andersen's model . The primary outcome of interest was dental care utilization, assessed through two questions: "Did your child have a dental visit in the past year?" Those who answered "yes" were further asked: "Which type of dental services did your child receive (curative or preventive/consultation)?" Dental service utilization was then categorized into three groups: no utilization, curative services utilization, and preventive/consultation services utilization. 
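The text names the CVR and CVI without giving formulas. A sketch of the conventional definitions (Lawshe's CVR and the item-level CVI) follows; the panel counts per item are hypothetical illustrations for the study's 13 experts, not figures from the paper:

```python
# Conventional content-validity indices (an assumption: the study does not
# state which variants it used).
def cvr(n_essential, n_experts):
    # Lawshe's Content Validity Ratio: CVR = (n_e - N/2) / (N/2),
    # where n_e experts rate the item "essential".
    return (n_essential - n_experts / 2) / (n_experts / 2)

def i_cvi(n_relevant, n_experts):
    # Item-level CVI: proportion of experts rating the item relevant
    # (typically 3 or 4 on a 4-point relevance scale).
    return n_relevant / n_experts

# Hypothetical item rated "essential" by 11 and "relevant" by 12 of 13 experts:
print(round(cvr(11, 13), 2))    # 0.69
print(round(i_cvi(12, 13), 2))  # 0.92
```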
Specific measures were selected as predictor variables, aligned with the components of the Andersen model: a) predisposing factors, including age, gender, parents' oral health knowledge, head-of-household education, oral health behaviors (tooth brushing, snack consumption), and dental visits before age one; b) enabling factors, encompassing socioeconomic variables (monthly income, residential district), basic insurance, and dental insurance; and c) need factors, including parent-perceived oral health and perceived oral health needs .

Data handling and statistical analysis

SPSS software (version 21) was used to summarize sample characteristics. Means and standard deviations (SD) were used for continuous variables, and frequencies for categorical variables. The parents' oral health knowledge score was computed as the sum of scores from seven questions; correct answers were scored as one, and incorrect answers (including "I do not know") were scored as zero. Other covariates included head-of-household education (less than diploma, diploma, associate or bachelor's, master's or higher), basic insurance (yes/no), dental insurance (yes/no), parent-perceived oral health needs in the past year (yes/no), and parent-perceived oral health (very poor, poor, moderate, good, very good, excellent). Missing data were generally low, at most 5% for all variables except income and dental insurance; for these two variables, missing values were imputed using the Expectation-Maximization (EM) algorithm in SPSS version 21. Equation-wise deletion was applied to handle missing data for the other variables, following standard structural equation modeling (SEM) practice. 
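The knowledge-score construction described above (seven items, scored 1 for a correct answer and 0 for an incorrect or "I do not know" response) can be sketched as follows; the answer key and responses are hypothetical:

```python
# Parents' oral health knowledge score: sum of 7 binary-scored items,
# as described in the text.
def knowledge_score(answers, correct):
    # answers: parent responses; correct: keyed answers.
    # Anything other than the keyed answer (including "dk" for
    # "I do not know") scores zero.
    return sum(1 if a == c else 0 for a, c in zip(answers, correct))

# Hypothetical 7-item key and one parent's responses:
key  = ["yes", "no", "yes", "yes", "no", "yes", "no"]
resp = ["yes", "no", "dk",  "yes", "yes", "yes", "no"]
print(knowledge_score(resp, key))  # 5
```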
Post-stratification survey weighting adjustments were conducted to mitigate inherent biases in the survey design and minimize the impact of data collection challenges, such as overrepresented or underrepresented demographic groups, which could influence the results. Weights were calculated and applied based on the age and sex distribution of the general population of children aged 12 years and younger in Tehran .

Generalized structural equation modeling

The study treated oral health behavior and socioeconomic status as latent constructs. Oral health behavior was measured through two questions: 1) "How often does your child brush their teeth?" (irregularly, once a day, more than once a day) and 2) "How often does your child consume snacks and sweet beverages?" (three times a day or more, once or twice a day, every week, every month or less). A history of dental visits before age one was also considered a component of oral health behavior but, given its retrospective nature, was treated separately and added to the model as a distinct observed variable. Two variables were used to measure socioeconomic status. First, monthly household income was categorized into four groups: very poor (100 USD or less), poor (100–200 USD), moderate (200–300 USD), and rich/very rich (more than 300 USD). Second, the district of residence in Tehran, which has 22 districts, was categorized into four strata based on a previous study ranking the districts by development and quality of life . Affluent districts comprise districts 1–3, 6, and 22; moderate districts comprise 4, 5, 8, 13, 20, and 21; less affluent districts comprise 7, 9, 11, 12, 14–16, and 19; and districts needing intervention comprise 10, 17, and 18. We merged the last two groups and used district as a trichotomous variable in the analysis (affluent, moderate, and less affluent). 
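Post-stratification weighting of the kind described (each age-sex cell's weight equals its population share divided by its sample share) can be sketched as follows; the cell counts below are hypothetical placeholders, not Tehran census figures:

```python
# Post-stratification: weight = population proportion / sample proportion,
# computed per demographic cell. Counts are HYPOTHETICAL illustrations.
def poststrat_weights(pop_counts, sample_counts):
    pop_total = sum(pop_counts.values())
    samp_total = sum(sample_counts.values())
    return {cell: (pop_counts[cell] / pop_total)
                  / (sample_counts[cell] / samp_total)
            for cell in pop_counts}

pop = {("0-6", "boy"): 260_000, ("0-6", "girl"): 250_000,
       ("7-12", "boy"): 245_000, ("7-12", "girl"): 245_000}
samp = {("0-6", "boy"): 210, ("0-6", "girl"): 213,
        ("7-12", "boy"): 231, ("7-12", "girl"): 232}

w = poststrat_weights(pop, samp)
# Cells underrepresented in the sample relative to the population get w > 1;
# the weighted sample total equals the unweighted total by construction.
```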
STATA software (version 17) was used for model construction. Variables with a p-value <0.2 in the bivariate analysis were considered for inclusion. Given our multilevel categorical outcome variable, we used a generalized structural equation model (GSEM) built around the three categories of variables in the Andersen behavioral model. GSEM, an extension of structural equation modeling (SEM), accommodates both categorical and continuous outcomes and allows any combination of observed variables in the model . We used various model specifications within the GSEM, including logit and ordinal links and the Bernoulli, multinomial, and Gaussian families. The model was applied separately to two age groups (children aged six years and younger, and children aged 7 to 12 years), allowing comparison between them. Our theoretical model depicts the direct and indirect effects of predisposing, enabling, and need factors on dental services utilization. Receiver operating characteristic (ROC) analysis was employed to evaluate the model's predictive ability. All analyses were performed in STATA version 17, with significance set at a p-value <0.05.

Ethical considerations

At the outset, the interviewers introduced themselves, explained the research objectives, and confirmed that participants were willing to be involved in the study, emphasizing that names did not need to be disclosed and that the interviews were conducted in a confidential setting. As the study was a phone survey, verbal consent was obtained from the parents by asking whether they were willing to participate; their response was recorded during the call. Those who declined did not take part. Participants were informed of their right to stop or withdraw from the study at any time. 
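The authors ran the ROC analysis in Stata. The AUC it reports has a useful rank interpretation: the probability that a randomly chosen service user receives a higher predicted probability than a randomly chosen non-user, with ties counted as one half. A minimal pure-Python sketch on hypothetical data:

```python
# AUC via the pairwise (rank) interpretation: fraction of (positive, negative)
# pairs in which the positive case scores higher; ties count 0.5.
def auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    pairs = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return pairs / (len(pos) * len(neg))

# Hypothetical labels (1 = used services) and model-predicted probabilities:
y = [1, 1, 1, 0, 0, 0]
p = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]
print(auc(y, p))  # 0.8888888888888888 (= 8/9)
```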
In cases of technical issues or participant preference to terminate the interview prematurely, replacements were sought from the same age group and residential area. To ensure the quality of data collection, a quality control team reviewed ten percent of randomly selected interviews through recorded phone calls. Ethical clearance was received from the Ethics Committee of Tehran University of Medical Science (IR.TUMS.DENTISTRY.REC.1401.094). 
Description of the study population

To achieve the sample size, 16258 calls were made; 5428 were answered, and 1322 led to completed questionnaires. The sample consisted of 886 children aged 12 years and younger: 423 aged six years and younger and 463 aged 7–12 years (mean age = 6.71, SD = 3.32). Approximately 49.8% of the children were male . About 19% of the children lived in affluent districts, 28% in moderate districts, and 53% in less affluent districts. Most household heads had an educational level ranging from a high school diploma to a bachelor's degree (68.4%), and the largest share of children belonged to middle-income families (40.3%) . Sixty-four percent of children had basic insurance, and 32.8% had dental insurance. According to parental perception, the largest proportion of children (37.6%) had good oral health, and 66.8% had no reported oral health needs in the past year . The average parents' oral health knowledge score was 4.41 ± 1.40 (range 0 to 7). Regarding oral health behaviors, 44.1% of children had no tooth-brushing routine, while 47.7% brushed once a day. Forty-four percent of children consumed snacks once or twice a day, and 25.3% three times a day or more. Additionally, 3.3% of children had a dental visit before age one . Overall, 57.2% of children did not use dental services, 22.1% used curative services, and 19.9% used preventive/consultation services .

Predisposing factors and dental services utilization

The corresponding table summarizes the frequency of dental services utilization based on predisposing, enabling, and need factors. 
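The call outcomes reported above reduce to a few descriptive ratios (simple proportions, not formal AAPOR response rates):

```python
# Descriptive call-outcome ratios from the figures reported in the text.
calls_dialed, calls_answered, completed = 16258, 5428, 1322

answer_rate = calls_answered / calls_dialed    # share of dialed numbers answered
completion_rate = completed / calls_answered   # share of answered calls completed
overall_yield = completed / calls_dialed       # completed interviews per dialed number

print(f"{answer_rate:.1%} {completion_rate:.1%} {overall_yield:.1%}")
# 33.4% 24.4% 8.1%
```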
The utilization rates for both preventive/consultation and curative services were higher among children aged 7 to 12 than among those aged six years and younger (20.7% versus 19.2% for preventive/consultation services and 29.9% versus 14% for curative services) . The percentage of children who did not use dental services in the past year was notably higher in households with less-educated household heads (67.7% for heads of household with less than a diploma, 64.8% for those with a high school diploma, 49.7% for those with an associate or bachelor's degree, and 47.8% for those with a master's degree or higher) . The pattern of dental service utilization did not differ significantly between genders. More regular tooth brushing was associated with more frequent use of preventive/consultation services (32.8% among children who brushed more than once a day versus 12.9% among those who brushed irregularly). Additionally, half of the children who had their first dental visit before the age of one received preventive/consultation services in the past year .

Enabling factors and dental services utilization

Non-utilization of dental services was most common in less affluent districts, where 62.2% of children fell into this category. Conversely, in affluent districts, the frequency of preventive/consultation services utilization was higher at 27%, and curative services utilization higher still at 29.4% . Regarding family income, the frequency of non-utilization decreased noticeably among higher income groups, with 41.2% reporting no utilization compared to 65.7% in the lowest income group. The trend was reversed for preventive/consultation services: 30.9% utilization in the higher-income group versus 13% in the lowest-income group . 
Need factors and dental services utilization

Most children whose parents perceived them to have oral health needs used curative services (52.9%). Conversely, among children without perceived oral health needs, the majority received no dental services (70.1%). Moreover, preventive/consultation services utilization was more prevalent among children whose parents perceived their oral health as better (27.6% among those with the best perceived oral health versus 0% among those with the poorest) .

Generalized structural equation model results

The figures show the final generalized structural equation model, and the tables present the GSEM results for the two age groups.

Causal network of dental services utilization in children aged 6 years and younger

Tooth brushing made the strongest contribution to the oral health behavior construct, with a robust coefficient of 1. Snack consumption showed a robust coefficient of 0.05, which was not statistically significant (p-value = 0.47). In the socioeconomic construct, living in less affluent districts had the greatest impact (robust coefficient = -1.24, p-value <0.001); living in moderate districts also showed a statistically significant effect (robust coefficient = -0.77, p-value = 0.007). Income had a robust coefficient of 1, indicating its positive contribution to the construct. Each one-year increase in age corresponded to 1.87 times higher odds of curative services utilization (p-value <0.001). Children who had a dental visit before age one showed 4.36 times higher odds of curative services utilization, but this association was not statistically significant. Children with reported oral health needs in the past year exhibited 54.77 times higher odds of curative services utilization (p-value <0.001). Having dental insurance was associated with 2.85 times higher odds of curative services utilization, although this association was not statistically significant . 
The odds of preventive/consultation services utilization increased by 1.45 times for each one-year increase in age (p-value <0.001) and by 1.36 times for each one-unit increase in knowledge score (p-value = 0.00). Additionally, a dental visit before the age of one increased the odds of preventive/consultation services utilization by 6.05 times (p-value = 0.04), and better socioeconomic status was also associated with higher odds of preventive/consultation services utilization (OR = 1.65, p-value = 0.03). The education level of the household head directly predicted dental insurance (OR = 2.34, p-value <0.001) and socioeconomic status (robust coefficient = 0.90, p-value <0.001). The child's age was directly associated with parent-perceived oral health needs in the past year (OR = 2.49, p-value <0.001) and indirectly linked to parent-perceived oral health (OR = 0.81, p-value <0.001). Additionally, better socioeconomic status was significantly associated with better parent-perceived oral health (OR = 1.46, p-value = 0.00).

Causal network of dental services utilization in 7–12-year-old children

In the behavior construct, tooth brushing had the largest robust coefficient, although none of the observed variables within this construct had a significant effect. In the socioeconomic construct, living in less affluent districts made the largest contribution (robust coefficient = -1.33, p-value <0.001); the robust coefficient was 1 for income and -0.82 for living in moderate districts (p-value = 0.00). The odds ratio of curative services utilization was 11.12 (p-value = 0.02) in children with a history of a dental visit before the age of one and 1.28 per one-unit increase in knowledge score (p-value = 0.03). Better socioeconomic status and having dental insurance were positively related to curative services utilization (OR = 2.53, p-value = 0.01 for socioeconomic status and OR = 4.17, p-value <0.001 for dental insurance).
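The per-year age effects reported for the younger group (OR = 1.87 for curative and OR = 1.45 for preventive/consultation services) are multiplicative on the odds scale, so they compound over an age gap. A minimal stdlib sketch using the reported values:

```python
import math

# Per-year odds ratios reported in the GSEM results for children
# aged 6 years and younger (taken from the estimates above).
OR_CURATIVE_PER_YEAR = 1.87
OR_PREVENTIVE_PER_YEAR = 1.45

def odds_multiplier(or_per_year: float, years: float) -> float:
    """Odds multiplier over an age gap: OR**years = exp(b * years),
    where b = ln(OR) is the underlying logit coefficient."""
    return math.exp(math.log(or_per_year) * years)

# A 3-year age gap multiplies the odds of curative use by ~6.5
# and the odds of preventive/consultation use by ~3.0.
print(round(odds_multiplier(OR_CURATIVE_PER_YEAR, 3), 2))    # 6.54
print(round(odds_multiplier(OR_PREVENTIVE_PER_YEAR, 3), 2))  # 3.05
```

Note that this multiplicative reading holds only within the fitted model; it is an interpretation aid, not an extrapolation rule.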
Parent-perceived oral health needs in the past year were associated with higher odds of curative services utilization (OR = 19.48, p-value <0.001). A dental visit before the age of one was associated with 10.05 times higher odds of using preventive/consultation services (p-value = 0.02), and better oral health behavior was positively related to preventive/consultation services utilization (OR = 1.25, p-value = 0.04). Parent-perceived oral health needs in the last year also had a statistically significant positive relationship with preventive/consultation services utilization (OR = 4.62, p-value <0.001). The education level of the household head had a statistically significant positive association with socioeconomic status (robust coefficient = 0.74, p-value <0.001) and dental insurance (OR = 1.76, p-value <0.001). The child's age and socioeconomic status were indirectly linked to parent-perceived oral health needs (OR = 0.84, p-value = 0.01 for age and OR = 0.56, p-value = 0.02 for socioeconomic status), and higher socioeconomic status was associated with better parent-perceived oral health (OR = 1.53, p-value <0.001).

The corresponding figure shows the ROC curves for the model's prediction of curative and preventive/consultation services utilization in the two age groups. The area under the ROC curve (AUC) was 0.98 for curative services utilization and 0.97 for preventive/consultation services utilization in children aged 6 years and younger. In children aged 7 to 12, the AUC was 0.79 for curative and 0.83 for preventive/consultation services utilization.

Sample characteristics

To achieve the sample size, 16,258 calls were conducted, 5,428 calls were answered, and 1,322 calls led to completed questionnaires. The sample consisted of 886 children aged 12 years and younger: 423 children aged 6 years and younger and 463 children aged 7–12 years (mean age = 6.71, SD = 3.32). Approximately 49.8% of the children were male.
About 19% of the children lived in affluent districts, 28% in moderate districts, and 53% in less affluent districts. Most household heads had an educational level ranging from a high school diploma to a bachelor's degree (68.4%), and most children belonged to middle-income families (40.3%). Sixty-four percent of children had basic insurance, and 32.8% had dental insurance. According to parental perception, the largest share of children (37.6%) had good oral health, and 66.8% had no reported oral health needs in the past year. The average score for parents' oral health knowledge was 4.41 ± 1.40 (range 0 to 7). Regarding oral health behaviors, 44.1% of children did not have a routine for tooth brushing, while 47.7% brushed their teeth once a day. Forty-four percent of children consumed snacks once or twice a day, and 25.3% consumed snacks three times a day or more. Additionally, 3.3% of children had a dental visit before the age of one. Overall, 57.2% of children did not use dental services, 22.1% used curative services, and 19.9% used preventive/consultation services. The corresponding table summarizes the frequency of dental services utilization based on predisposing, enabling, and need factors.
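From the recruitment figures reported above (16,258 calls, 5,428 answered, 1,322 completed questionnaires), the answer and completion rates follow directly; a quick stdlib check:

```python
calls, answered, completed = 16_258, 5_428, 1_322

answer_rate = answered / calls          # share of dialed calls that were answered
completion_rate = completed / answered  # share of answered calls that completed the questionnaire
overall_yield = completed / calls       # completed questionnaires per dialed call

print(f"{answer_rate:.1%}")      # 33.4%
print(f"{completion_rate:.1%}")  # 24.4%
print(f"{overall_yield:.1%}")    # 8.1%
```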
Discussion

Ensuring that all populations can access necessary health services is a key component of the Sustainable Development Goals (SDGs). This study's findings highlight dental services utilization, defined as the use of dental services within a specific timeframe, as a complex and multifactorial phenomenon. Using Andersen's behavioral model, we analyzed dental services utilization and its associated factors through a generalized structural equation model. Understanding these underlying factors can assist policymakers and health authorities in improving patterns of dental services utilization within the population. In children aged six years and younger, age and parent-perceived oral health needs were related to curative services utilization, while age and socioeconomic status were associated with preventive/consultation services utilization. In children aged 7 to 12, a dental visit before the age of one, parent-perceived oral health needs, and socioeconomic status were associated with both curative and preventive/consultation services utilization; dental insurance was additionally related to curative services utilization in this age group.
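The AUC values reported in the Results (0.97–0.98 for the younger group, 0.79–0.83 for the older group) equal the probability that a randomly chosen service user is assigned a higher predicted probability than a randomly chosen non-user, with ties counted as one-half. A self-contained stdlib sketch of this rank-based computation, using hypothetical predicted probabilities:

```python
def auc(pos_scores, neg_scores):
    """Rank-based AUC: P(score_pos > score_neg), counting ties as 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical model-predicted probabilities of dental services utilization
used = [0.9, 0.8, 0.7, 0.6]      # children who used the service
not_used = [0.5, 0.4, 0.7, 0.2]  # children who did not

print(auc(used, not_used))  # 0.90625
```

This pairwise definition is equivalent to the area under the ROC curve and makes clear why an AUC near 0.5 indicates no discrimination.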
Among children aged six years and younger, 66.2% had no dental visit, 13.9% utilized curative services, and 19.1% used preventive/consultation services in the last year. Among children aged 7 to 12, 49% did not use dental services in the past year, 29.6% used curative services, and 20.5% used preventive/consultation services. A study conducted in China showed that 45.2% of children aged 2 to 6 years used dental services, with 24.3% utilizing preventive services. The frequency of dental services utilization among children aged 1–10 years was reported as 54.5% and 59.1% in West Virginia and Pennsylvania, respectively. These frequencies are comparable to our findings. In contrast, a study in Al-Madinah, Saudi Arabia, showed a higher utilization rate of 76.2% among children aged 9–12 years, possibly because dental services are free in that country. A study of Canadian children indicated that 48.7% of children aged 1 to 4 and 90.3% of children aged 5 to 11 used preventive dental services in the past year, reflecting Canada's publicly funded dental care for children.

Predisposing factors

As per our findings, dental services utilization increased significantly with age among children aged six years and younger. This trend, supported by other evidence, can be attributed to the accumulation of oral health issues over time, leading to higher utilization rates as children grow older. Moreover, younger children may lack the communication skills to express their problems, so parents can perceive their dental needs more accurately as children grow older. Our study also revealed a positive association between age and parent-perceived oral health needs among children aged six years and younger, indicating that as children grew older, parents perceived greater oral health needs and poorer oral health.
However, this relationship differed in children aged 7 to 12: as age increased, parent-perceived oral health needs decreased, and no significant association was found between age and parent-perceived oral health. This finding explains the lack of a significant relationship between age and dental services utilization in this older age group. A critical reason for delaying dental visits is a limited understanding of the importance of early dental care. Previous studies have also underscored the role of knowledge in influencing dental services utilization. A study in central Mexico showed that although knowledge significantly affected curative dental services utilization in adolescents, its association with preventive dental services utilization was not significant. Similarly, our study showed a partial effect of knowledge on dental services utilization: a higher knowledge score increased the odds of utilizing preventive/consultation services in children aged six and younger and of curative services in children aged 7 to 12. One reason for this pattern could be that curative services are uncommon among younger children and preventive services are uncommon among older ones. A dental visit before the age of one increased the odds of preventive/consultation services utilization in both age groups. A systematic review by Bhaskar et al. indicated that dental visits before age one were associated with more preventive visits and fewer curative dental treatments. The increased odds of curative services utilization observed in our study among children who had a dental visit before the age of one may reflect improved parental attitudes towards oral health and towards preserving children's teeth; parental attitudes towards oral health are closely linked to the frequency of dental treatments received. Oral health behavior significantly influenced preventive/consultation services utilization in children aged 7 to 12.
A systematic review by Curi et al. highlighted that oral health behaviors such as tooth brushing and diet are critical determinants of dental services utilization, particularly preventive services, in children aged 1 to 15. In our study, this association was not significant in the younger age group; notably, the frequency of tooth brushing and snack consumption in this group was lower than in older children. In our study, dental services utilization was similar between the two genders, so we did not include this variable in the final model. A study in India showed boys were more likely to use dental services in the past year. By contrast, Aghili et al. in Saudi Arabia and Medina-Solis et al. in Nicaragua reported higher dental services utilization among girls. The results regarding the effect of gender on dental services utilization are inconsistent and should be interpreted cautiously. Differences in oral health status between genders could explain differences in utilization, although a meta-analysis in Iran found no difference between girls and boys in the DMFT/dmft index. Another predisposing variable was the household head's education level, which positively correlated with socioeconomic status. Higher education provides better job opportunities, enhancing socioeconomic status. This finding is consistent with a study by Gao et al., which highlighted the association between poverty and education.

Enabling factors

According to our model, enabling factors not only directly influenced dental services utilization but also corresponded to individual needs. Among these factors, better socioeconomic status was associated with increased odds of utilizing preventive/consultation services in both age groups and curative services in children aged 7 to 12.
Socioeconomic status encompasses cultural and economic factors that shape parents' perceptions of the importance of oral health and related behaviors, including dental services utilization, in line with Bourdieu's sociological theory. Additionally, higher socioeconomic status broadens the choice of service providers and thus enhances access. This finding aligns with a study in Brazil indicating that children's economic status was associated with both preventive and problem-based dental visits. In our study, although not statistically significant, the odds of curative services utilization increased with improving socioeconomic status among children aged six and younger; the cost of treatment is lower in early life, which may mitigate the influence of socioeconomic status in this age group. Our results also showed that lower socioeconomic status was associated with higher odds of parent-perceived oral health needs and poorer parent-perceived oral health. A previous study similarly found that children from low-income families had more oral health needs and lower dental services utilization. Dental insurance, another enabling factor, showed no significant associations with the need component, and it was not related to dental services utilization except for curative services in children aged 7 to 12. Some studies have confirmed the role of insurance, whereas others have found no significant association. Factors such as the type of insurance (private or public), the characteristics of insured groups, and the nature of the services covered may obscure the role of insurance in dental services utilization. Enabling factors predict potential access to dental services and are a prerequisite for realized use. Socioeconomic status and insurance coverage are not the only enabling factors: according to the Andersen model, a regular source of care is also important and should be considered in future studies.
Need factors

Andersen argued that need is the most immediate predictor of dental services utilization. This aligns with our findings, which showed that parent-perceived oral health needs were the strongest predictor of curative services utilization in both age groups and of preventive/consultation services in children aged 7 to 12. Previous studies have also highlighted the significant role of need, whether perceived or evaluated, in dental services utilization; indeed, need motivates people to seek dental services. Parent-perceived oral health was another aspect of the need component, but our model found no significant relationship between it and dental services utilization. This finding is consistent with a study conducted in Al-Madinah, Saudi Arabia, which showed that dental services utilization in children was predicted by dental pain but was not significantly associated with self-perceived oral health. This study was conducted via a phone survey, which may introduce limitations such as the inability to use visual cues to establish rapport or instances in which respondents are not prepared for the interview at the time of the call. Implementing an effective communication process and designing a suitable framework for telephone interviews are crucial for efficient data collection with this method. Moreover, this method could not reach individuals without access to a phone; given the extensive landline coverage (8,200,300 lines) and the considerable frequency of cell phone use in Tehran (72.6%), this issue was less troublesome. In addition to its limitations, this method offers several advantages, including broader geographic coverage (as interviews are conducted centrally), guaranteed anonymity and comfort for both interviewer and interviewee, and faster data collection.
Given these advantages, the challenges posed by the recent peak of the COVID-19 pandemic, the lower cost of telephone-based methods, and the significantly lower response rate of face-to-face methods in our society, we opted for this method of data collection. Another limitation is that the information was parent-reported, which could introduce recall bias; we tried to mitigate this by using standard questions from previous studies and another standard questionnaire. The third limitation is the study's cross-sectional design, which cannot establish causal relationships between the explanatory variables and dental services utilization. Nevertheless, using an adequate sample size and structural equation modeling based on the Andersen model, our study provides valuable results on the factors associated with preventive/consultation and curative services utilization, examined separately across two age groups in all districts of Tehran. This categorization enabled us to discern the factors linked to these two types of dental care in age groups with distinct characteristics. Additionally, we employed a generalized structural equation model to construct a network illustrating the relationships among predictor variables and outcomes concurrently. These findings can assist policymakers in targeting effective and modifiable determinants tailored to specific target populations.
Despite these limitations, the telephone method offers several advantages, including broader geographic coverage, since interviews are conducted centrally, the anonymity and comfort of both interviewer and interviewee, and faster data collection. Given these advantages, the challenges posed by the recent peak of the COVID-19 pandemic, the lower cost of telephone-based methods, and the significantly lower response rate of face-to-face methods in our society, we opted to employ this method for data collection. Another limitation was that the information was parent-reported, which could introduce recall bias. We tried to mitigate this limitation by using standard questions from previous studies and another standardized questionnaire. The third limitation is the study’s cross-sectional design, which cannot establish causal relationships between the explanatory variables and dental services utilization. Nevertheless, with an adequate sample size and structural equation modeling based on the Andersen model, our study provides valuable results about the factors associated with preventive/consultation and curative services utilization, analyzed separately across two age groups in all districts of Tehran. This categorization enabled us to discern the factors linked to these two types of dental care among age groups with distinct characteristics. Additionally, we employed a generalized structural equation model to construct a network illustrating the relationships among predictor variables, outcomes, and each other concurrently. These findings provide insights that can assist policymakers in targeting effective and modifiable determinants tailored to specific target populations. Among children aged 7–12, enabling factors such as socioeconomic status and dental insurance, along with need factors such as parent-perceived oral health need, played a crucial role in dental services utilization.
In contrast, predisposing factors such as age significantly contributed to dental care utilization for younger children. Need factors emerged as strong predictors of dental services utilization. Policymakers should prioritize investigating modifiable factors associated with dental care utilization within each age group. Addressing these factors can enhance healthy behaviors and promote oral health across the population.
S1 File. Questionnaire for utilization of oral health services based on Andersen’s behavioral model. (DOCX)
S2 File. Data of study. (SAV)
Using a combination of quantitative culture, molecular, and infrastructure data to rank potential sources of fecal contamination in Town Creek Estuary, North Carolina

Estuaries provide a variety of recreational, economic, and ecosystem services to the populations that surround and inhabit their waters. Acute and chronic contamination events in estuaries are becoming more prevalent and often stem from a combination of natural and human-driven events. A common but problematic contaminant in estuaries is fecal waste, which frequently increases in concentration following storm events from stormwater runoff. However, chronic fecal contamination in estuaries is often a result of aging sewer structures, which can become overwhelmed by infiltration, exfiltration, and inflow from increased use during seasonal tourism and by weather, including both typical precipitation conditions and extreme events such as hurricanes and tropical storms. In an effort to mitigate contamination, coastal municipalities are increasingly using stormwater control measures to reduce estuarine contamination, particularly for nutrients, sediment, and fecal pollution. However, stormwater control measures cannot account for the entirety of infrastructure-related contamination impacting estuaries. Extensive work along the coastal plain states of North Carolina and Virginia indicates that municipalities are seeking decision-making data that inform the prioritization of infrastructure repairs to reduce estuarine water quality degradation. Sewage and stormwater infrastructure in the United States (US) has been highlighted in recent years as aged and in dire need of repair. Water infrastructure improvements are considered a priority by the US government, and funding for improvement projects was included in the passage of the 2021 Infrastructure Investment and Jobs Act.
This law has resulted in the federal government awarding more than 50 billion USD to the US Environmental Protection Agency (U.S. EPA) to distribute to states, Tribes, and territories to improve water infrastructure including sewage systems . In February 2023, North Carolina Governor Roy Cooper approved 462.9 million USD in funding for 249 infrastructure projects in 80 communities state-wide . Many of these funds have been earmarked for specific large-scale aging and remediation projects in metropolitan centers, but there is an opportunity to develop infrastructure improvements in smaller coastal regions receiving large numbers of recreational water users . Town Creek Estuary (TCE) is a popular recreational estuary located in Beaufort, North Carolina, a small coastal town with a population of approximately 4,500 full time residents. Despite its small size, the Town of Beaufort and surrounding Carteret County see an annual visitor count exceeding one million individuals due to the abundance of water-related recreational activities in the region (15). Although tourism drives the local economy, coastal development required to support visitors has increased stress on aging sewer infrastructure and area septic systems, which in turn threatens estuarine ecosystems and water quality . Poor estuarine water quality increases the potential risk of human exposure to harmful contaminants during recreation, which is critical in places such as Beaufort that rely on tourism. Waters contaminated with fecal material are estimated to cause 170 million enteric and respiratory illnesses annually . The Town of Beaufort has a long history of collaborating with local researchers to identify contamination and develop mitigation strategies to preserve water quality . From these studies and collaborations with the town, we have learned that Beaufort has an extensive underground network of sewer and stormwater infrastructure, with sewer pipe construction dating back to 1969. 
Sewer pipe materials include cured-in-place pipe (CIPP), ductile iron pipe (DIP), polyvinyl chloride (PVC), truss pipe, and vitrified clay (VC). Studies have shown that sewer pipe durability declines with age, with exponential losses in durability and increased corrosion occurring by 50 years of service. Moreover, the town has several stormwater outflows that drain into TCE. Stormwater runoff is a persistent non-point source of microbial contamination nationwide. High levels of biological contaminants, including fecal indicator bacteria (FIB), have been directly linked to disease outbreaks in recreational swimming areas. With multiple potential sources of fecal contamination in TCE, a thorough analysis of the estuary is crucial to protect water quality, preserve the local economy, and ensure public safety. The primary objective of this study was to design an estuarine water quality monitoring program and establish a ranking approach for identifying locations of potentially compromised stormwater and sewage infrastructure in a coastal town. To accomplish this, we developed a creek-estuary sampling transect across ten sites spanning from the creek headwaters to a prominent recreational use location (junior sailing camp, maritime museum, and marina) and used GIS data to identify the infrastructure along the transect. We then integrated the infrastructure data with water quality and microbial data to develop a ranking approach that prioritizes potential areas in need of review and repair. This comprehensive approach, combining microbial source tracking, water quality data, and infrastructure data, can be used by town managers to prioritize infrastructure and stormwater projects to improve estuarine water quality.
Sample site selection
TCE is located within the city limits of Beaufort, North Carolina, and encompasses approximately 0.36 square kilometers. The estuary provides a variety of water recreation activities including boating, fishing, and swimming, and is host to a summer sailing camp. To assess fecal contamination along the estuary and creek headwaters, ten sampling sites were selected ( and ). Sites were selected due to variable proximity to a) stormwater outfalls and sewer infrastructure, b) septic systems, c) lift stations, d) marinas, e) estuarine marsh, and f) down-channel locations relative to possible contamination sources. Site 5, located within the estuarine marsh, was used as a control because marsh habitat is known to naturally filter and attenuate contaminants.
Sample collection
Grab samples were collected in acid-washed 1 L HDPE bottles (ThermoFisher Scientific, Waltham, MA) at each of the ten sites immediately following high tide. Samples were collected at high tide to capture the flushing of potential contaminants from non-point sources. Each bottle was rinsed with the surrounding water three times before a sample was taken from an undisturbed section of water. Samples were collected on eight dates between August 6, 2021, and October 11, 2021. At the location where each grab sample was collected, a YSI-EXO2 multi-parameter water quality sonde (YSI Inc./Xylem Inc., Yellow Springs, OH) was deployed just below the surface of the water, ensuring all probes were completely submerged, to measure water temperature (°C), atmospheric pressure (mmHg), dissolved oxygen (% saturation), specific conductance (μS/cm), salinity (ppt), and turbidity (FNU). Samples were held at ambient temperature until they were transported to the laboratory for processing. All samples were processed within two hours of collection. Using a dissolved oxygen conversion calculator produced by the University of Minnesota Natural Resources Research Institute and approved by the U.S.
EPA, dissolved oxygen saturation was converted to mg/L.
Local environmental conditions
Tide height was recorded in meters using daily historical tide charts measured at the Duke Marine Lab, station id: 8656483, in Beaufort, NC, maintained by the National Oceanic and Atmospheric Administration (NOAA). The base tide height was calculated by averaging all the high tide data provided by NOAA at the Duke Marine Lab from the first to the last sampling date. Any collection date with a tide height higher than the base represents a day with a larger-than-normal tidal influence. Precipitation in centimeters (cm) over the previous 24 and 72 hours was determined by referencing reports gathered by the Community Collaborative Rain, Hail, and Snow Network at site NC-CR-139: Beaufort 0.5 W. Any sampling date with measurable precipitation during both the 24 hours and 72 hours prior was deemed a wet weather event. Both precipitation measurements were used because several studies have shown that precipitation up to three days prior to sample collection affects the concentration of microorganisms. Wind speed and direction were collected from NOAA’s meteorological observation dataset recorded at the Duke Marine Lab, station id: 8656483, in Beaufort, NC. Wind speed was reported in meters per second (m/s), and wind direction was reported in degrees from true North. The wind direction was then converted to a cardinal direction by referencing the conversion chart provided by the University of Northern Iowa. The base wind speed and direction were calculated by averaging the available data for both parameters from the first sampling date to the last. Any sampling date with a wind speed higher than the base represents a day with a larger-than-normal wind influence. Environmental condition data are available in .
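The dissolved oxygen conversion and the weather classifications described above can be sketched as follows. The Benson and Krause (1984) equilibrium fit below is a stand-in for the NRRI/U.S. EPA calculator actually used in the study, the 16-point compass mapping mirrors the referenced conversion chart, and all function names are illustrative.

```python
import math

def o2_equilibrium_mg_per_l(temp_c, salinity_ppt=0.0):
    """Equilibrium dissolved-oxygen concentration (mg/L) at 1 atm,
    from the Benson & Krause (1984) fit used in USGS solubility tables."""
    t = temp_c + 273.15  # absolute temperature, K
    ln_do = (-139.34411
             + 1.575701e5 / t
             - 6.642308e7 / t ** 2
             + 1.243800e10 / t ** 3
             - 8.621949e11 / t ** 4)
    # salinity correction term
    ln_do -= salinity_ppt * (1.7674e-2 - 1.0754e1 / t + 2.1407e3 / t ** 2)
    return math.exp(ln_do)

def do_percent_sat_to_mg_l(percent_sat, temp_c, salinity_ppt):
    """Convert a sonde %-saturation reading to mg/L at ambient T and S."""
    return percent_sat / 100.0 * o2_equilibrium_mg_per_l(temp_c, salinity_ppt)

CARDINALS = ["N", "NNE", "NE", "ENE", "E", "ESE", "SE", "SSE",
             "S", "SSW", "SW", "WSW", "W", "WNW", "NW", "NNW"]

def degrees_to_cardinal(deg):
    """Map degrees from true North onto one of 16 sectors of 22.5 degrees."""
    return CARDINALS[int((deg % 360) / 22.5 + 0.5) % 16]

def is_wet_weather(precip_24h_cm, precip_72h_cm):
    """Wet-weather event: measurable rain in BOTH the 24 h and 72 h windows."""
    return precip_24h_cm > 0 and precip_72h_cm > 0
```

For example, a 100% saturation reading at 20 °C in fresh water converts to roughly 9.1 mg/L, and the same reading at 35 ppt salinity converts to a lower concentration, reflecting reduced oxygen solubility in seawater.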
Culture-based methods for determining FIB concentration
Colilert-18® and Enterolert™ kits from IDEXX Laboratories (Westbrook, ME) were used to assess the most probable number (MPN) of total coliforms (TC), Escherichia coli (EC), and Enterococcus spp. (ENT) bacteria in each water sample following the manufacturer’s instructions. Briefly, a 1:10 dilution of each water sample was prepared by combining 90 mL of deionized water, 10 mL of raw sample, and the respective test media in 120 mL bottles containing sodium thiosulfate (non-fluorescing polystyrene, IDEXX). The bottles were mixed by inversion for 30–60 seconds until fully homogenized and then placed on the bench top for a minimum of three minutes to ensure no bubbles were present. The solutions were then poured into individual IDEXX Quanti®-Tray 2000s and sealed using a Quanti®-Tray Sealer PLUS (IDEXX). The trays containing Colilert-18® media were incubated for 18 hours at 35°C, while the trays with Enterolert™ media were incubated for 24 hours at 41°C. After the appropriate incubation period, the trays were removed and analyzed. After quantifying the number of positive wells for each tray and assay, the MPN per 100 mL and 95% confidence interval were generated for TC, EC, and ENT using the IDEXX MPN calculator. A solution of 1X PBS was prepared as a method blank and analyzed with each batch of samples to ensure there was no contamination during processing. Each grab sample was tested in duplicate, and the MPN values generated from each Quanti®-Tray 2000 were averaged to produce a mean value for each sampling site for each sampling event.
DNA extraction and droplet digital PCR (ddPCR) for quantitative microbial source tracking (qMST)
Water samples collected in the field were filtered for DNA within two hours of collection.
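The MPN estimate behind the IDEXX calculator described above can be approximated by maximum likelihood. The sketch below assumes the nominal Quanti®-Tray 2000 layout of 49 large wells (about 1.86 mL each) and 48 small wells (about 0.186 mL each) and solves the likelihood equation by bisection; these volumes and the solver are illustrative assumptions, not IDEXX’s exact algorithm.

```python
import math

def mpn_per_100ml(large_pos, small_pos, dilution=10,
                  n_large=49, v_large=1.86, n_small=48, v_small=0.186):
    """Maximum-likelihood MPN for a Quanti-Tray 2000-style layout.

    Finds the organism density lam (per mL of diluted sample) where the
    derivative of the log-likelihood is zero:
        sum_i [ x_i * v_i * (1 - p_i) / p_i - (n_i - x_i) * v_i ] = 0,
    with p_i = 1 - exp(-lam * v_i), then scales to 100 mL of raw sample.
    """
    wells = [(large_pos, n_large, v_large), (small_pos, n_small, v_small)]
    if large_pos == 0 and small_pos == 0:
        return 0.0  # all wells negative: MPN below detection

    def score(lam):
        s = 0.0
        for x, n, v in wells:
            p = 1.0 - math.exp(-lam * v)
            if x > 0:
                s += x * v * (1.0 - p) / p
            s -= (n - x) * v
        return s

    lo, hi = 1e-9, 1e4  # bracket spanning many decades
    for _ in range(200):
        mid = math.sqrt(lo * hi)  # geometric bisection (score is decreasing)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return lo * 100.0 * dilution  # per 100 mL of the undiluted sample
```

With 10 positive large wells, no positive small wells, and the 1:10 dilution used in this study, the estimate comes out near 110 MPN per 100 mL, consistent in magnitude with published Quanti-Tray tables.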
Duplicate 150 mL water samples were vacuum filtered to dryness using 47 mm diameter polycarbonate filters with a 0.4 μm pore size (HTTP, Millipore, Bedford, MA) on a six-place filtration manifold and vacuum pump apparatus. Following filtration, filters were placed into DNase/RNase-free microcentrifuge tubes using sterile forceps and stored at -80°C for a maximum of 3 months until downstream processing. DNA was extracted from one filter per sample following lysis with 1 mL easyMag® Lysis Buffer (BioMérieux, Durham, NC) containing 7.4 × 10^5 copies of the extraction recovery control, gyrA, from a haloalkaliphilic archaeon. The lysed samples were incubated for 10 minutes at room temperature, followed by total nucleic acid extraction using an automated magnetic particle analyzer (KingFisher™ Flex, Thermo Fisher Scientific, Waltham, MA) and easyMag® NucliSENS® reagents (BioMérieux, Durham, NC), and eluted in 100 μL of Buffer AE (19077, QIAGEN, Germantown, MD). Specific details of the automated extraction method can be found in Beattie et al. 2022. A duplexed PCR mastermix targeting HF183 and the gyrA gene was created by adding 0.9 nM of the respective forward and reverse primers, 0.25 nM of the respective probes, 12.5 μL of ddPCR™ 2X Supermix for Probes (no dUTP, Bio-Rad Laboratories), 5 μL of DNA, and nuclease-free water to a final reaction volume of 25 μL. Each sample was run in duplicate. Primers and probes were purchased from LGC Biosearch Technologies (Petaluma, CA), and the sequences for the HF183 assay are shown in ; sequences for the haloalkaliphilic archaeon gyrA assay were kindly provided by John Griffith (Southern California Coastal Water Research Project, Costa Mesa, CA) and are in preparation for publication. In this study, gyrA was spiked into the extraction buffer and was used to assess both extraction recovery and inhibition.
Positive HF183 controls (see ), no-extraction controls, and no-template controls were included with each assay plate, in addition to method blanks from each sampling date. No-extraction controls consisted of a sterile 47 mm polycarbonate filter (0.4 μm pore size) extracted using the same method as sample filters and then analyzed by ddPCR using the same method as the samples. No-template controls consisted of PCR master mix containing nuclease-free water instead of sample DNA. Additional MIQE details can be found in . Twenty μL of the PCR mastermix and sample were pipetted into the sample wells of a DG8™ Cartridge (Bio-Rad) using a manual 8-channel pipette (L8-50XLS+, Rainin, Oakland, CA), followed by the addition of 70 μL of Droplet Generation Oil for Probes (Bio-Rad) to the oil wells. The cartridges were covered with DG8™ Gaskets (Bio-Rad) and processed in a manual Droplet Generator (Bio-Rad). The droplets were gently transferred to a semi-skirted 96-well PCR plate (mTEC, Eppendorf, Framingham, MA) using a manual 8-channel pipette. The PCR plate was sealed with pierceable foil (Bio-Rad) using a PX1™ PCR Plate Sealer (Bio-Rad). The PCR plate was placed in a C1000 Touch™ Thermal Cycler (Bio-Rad), and amplification was performed with the following temperature profile: 10 min at 95°C for initial denaturation; 40 cycles of 95°C for 30 s and 58°C for 60 s with a ramp rate of 2°C/s; followed by 98°C for 10 min and an indefinite hold at 4°C (Zhu et al. 2020). After PCR cycling was complete, the plate was placed in a QX200™ instrument (Bio-Rad) and droplets were analyzed according to the manufacturer’s instructions for 6-FAM™/HEX™. Data acquisition and analysis were performed with QuantaSoft™ v. 1.7 (Bio-Rad).
The fluorescence amplitude threshold distinguishing positive from negative droplets was set manually by the analyst at the midpoint between the average baseline fluorescence amplitudes of the negative and positive droplet clusters. The same threshold was applied to all wells of a given PCR plate. Measurement results of single PCR wells were excluded if the total number of accepted droplets was <10,000 or if the average fluorescence amplitudes of positive or negative droplets were clearly different from those of the other wells on the plate, in accordance with manufacturer guidelines. The QuantaSoft software uses the Poisson distribution to quantify the concentration of targets based on the numbers of positive and accepted droplets in each well. Samples were quantified in duplicate, and replicate wells were merged. A sample was considered positive and quantifiable if the minimum threshold of three positive droplets was met.
Geographic information systems
All analysis of sewer and stormwater infrastructure was performed using ArcGIS Pro 2.7.0 (Esri Inc., Redlands, California). Shapefiles of the sewer and stormwater infrastructure were provided by the Beaufort, NC, Town Engineer (2021). The attribute table of each shapefile included information on each pipe’s diameter, length, material, and date of construction. The attribute tables also included the location and date of construction of nearby lift stations and stormwater outflows. The projected coordinate system used for all maps was NAD 1983 ft US, and all shapefiles were overlaid on the Hybrid Reference Layer base map (Esri Inc., Redlands, California). All mapping figures were made using National Agriculture Imagery Program (NAIP) aerial imagery from the United States Geological Survey Earth Resources Observation and Science (USGS EROS) database.
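The Poisson step noted in the ddPCR section above, which converts positive and accepted droplet counts into a target concentration, can be sketched as follows. The QC rules (three-positive-droplet quantification threshold, 10,000-droplet well minimum) come from the study; the nominal QX200 droplet volume of roughly 0.85 nL is an assumption here, as QuantaSoft applies its own calibrated droplet volume.

```python
import math

DROPLET_VOL_UL = 0.00085  # ~0.85 nL nominal QX200 droplet volume (assumed)

def copies_per_ul(positives, accepted,
                  droplet_vol_ul=DROPLET_VOL_UL, min_positives=3):
    """Poisson estimate of target copies per uL of PCR reaction.

    lam = -ln(fraction of negative droplets) is the mean copies per
    droplet; dividing by droplet volume gives the concentration.
    Returns None for wells excluded by the <10,000-droplet QC rule and
    0.0 for wells below the three-positive-droplet threshold.
    """
    if accepted < 10000:
        return None  # well excluded per QC rule
    if positives < min_positives:
        return 0.0  # below the quantification threshold used in this study
    neg_fraction = (accepted - positives) / accepted
    lam = -math.log(neg_fraction)  # mean target copies per droplet
    return lam / droplet_vol_ul
```

For instance, 1,500 positive droplets out of 15,000 accepted gives lam = -ln(0.9) ≈ 0.105 copies per droplet, or about 124 copies/μL under the assumed droplet volume.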
Models for ranking sources of fecal contamination
Two models were developed to rank potential sources of fecal contamination in TCE: an equal weight model and a variable weight model. Six parameters were included from each site: mean EC concentration, mean ENT concentration, mean HF183 concentration, percentage of pipes in a 400-m radius made of vitrified clay, percentage of pipes in a 400-m radius over 50 years of age, and the inverse of the distance to the nearest stormwater pipe in a 400-m radius. The parameters were ranked at each site, with 10 being the highest concentration, percentage, or inverse distance of the parameter and 1 being the lowest. For the equal weight model, each parameter was given equal weight; thus, the overall site ranking was based on the total sum of the parameter ranks at each site. See for additional details. For the variable weight model, empirical knowledge was used to weight the parameters from 1 to 6, with 6 being the highest weight, based on whether the presence of the parameter indicated the presence of human fecal contamination; measured markers of fecal contamination (EC, ENT, HF183) that directly link contamination to human sources received the highest weights. If two parameters were equally likely to indicate the presence of fecal contamination, they were given the same weight, and the values of the subsequent weights were adjusted accordingly; the weights given to each parameter in the variable weight model are shown in . For both models, each parameter weight was multiplied by the parameter rank at each site, and a value was calculated.
The value of the six parameters at each site was summed, and the site with the highest total sum was considered the highest potential source of fecal contamination, represented by a final rank of 1. Additional details can be found in .
Statistical analysis
Any quantified FIB MPN per 100 mL values below or above the detection limits, containing a "<" or ">" symbol as assigned by the IDEXX MPN calculator, were reassigned to the next respective numerical value for use in statistical calculations. For example, a value of "<10" was changed to 9 and a value of ">24196" was changed to 24197. Additionally, all non-detect HF183 concentrations were scored as 0 for statistical analyses. All FIB and microbial gene marker concentration data were log10 transformed to reduce skewness, as the Shapiro-Wilk test determined the data were not normally distributed (p < 0.05). All statistical tests were performed at a significance level of p = 0.05 and a confidence level of 95%. Differences between site concentrations of the log10-transformed FIB, molecular qMST targets, and environmental parameters were determined using the Kruskal-Wallis test for non-parametric data. If significant differences were identified, the Dunn test was used to determine which collection sites and dates differed significantly. The non-parametric Spearman’s rank correlation test was used to evaluate correlations between FIB, microbial gene markers, environmental data, and sewer infrastructure; the strength of the correlation is denoted by the Spearman’s rank correlation coefficient, rs. All statistical tests were conducted using R software (R Core Team, Vienna, Austria) in RStudio (RStudio Team, Boston, MA) with the tidyverse, dplyr, car, corrplot, and lme4 packages. All figures were created using the ggplot2 package.
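The two weighting schemes described in the ranking model section above can be sketched in a few lines. The three-site parameter values and the weights below are hypothetical, chosen only to illustrate how the equal and variable weight totals can diverge; the study itself used ten sites (ranks 1 to 10) and six parameters (weights 1 to 6).

```python
# Sketch of the equal- and variable-weight site-ranking schemes.

def rank_values(values):
    """Rank sites by one parameter: the largest value gets the highest
    rank (10 for ten sites), the smallest gets 1. Ties break arbitrarily."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for pos, i in enumerate(order, start=1):
        ranks[i] = pos
    return ranks

def score_sites(site_params, weights):
    """site_params maps parameter name -> per-site values; weights maps
    parameter name -> model weight. Returns the weighted rank total per
    site; the highest total corresponds to final rank 1."""
    n_sites = len(next(iter(site_params.values())))
    totals = [0.0] * n_sites
    for param, values in site_params.items():
        for i, r in enumerate(rank_values(values)):
            totals[i] += weights[param] * r
    return totals

# Hypothetical data: three sites, two parameters.
params = {"HF183": [120.0, 5.0, 40.0], "pct_clay_pipe": [30.0, 60.0, 10.0]}
equal = score_sites(params, {"HF183": 1, "pct_clay_pipe": 1})
variable = score_sites(params, {"HF183": 6, "pct_clay_pipe": 2})
```

In this toy example, up-weighting the direct human fecal marker (HF183) widens the margin of the first site over the others, mirroring how the variable weight model emphasizes measured contamination over infrastructure proxies.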
TCE is located within the city limits of Beaufort, North Carolina, and encompasses approximately 0.36 square kilometers. The estuary provides a variety of water recreation activities including boating, fishing, and swimming, and is host to a summer sailing camp. To assess fecal contamination along the estuary and creek headwaters, ten sampling sites were selected ( and ). Sites were selected due to variable proximity to a) stormwater outfalls and sewer infrastructure, b) septic systems, c) lift stations, d) marinas, e) estuarine marsh, and f) down-channel from possible contamination sources. Site 5, located within the estuarine marsh, was used as a control because marsh habitat is known to naturally filter and attenuate contaminants .
Grab samples were collected in acid-washed 1 L HDPE bottles (ThermoFisher Scientific, Waltham, MA) at each of the ten sites immediately following high tide. Samples were collected at high tide to capture the flushing of potential contaminants from non-point sources. Each bottle was washed with the surrounding water three times before a sample was taken from an undisturbed section of water . Samples were collected on eight dates between August 6, 2021, and October 11, 2021. At the location where each grab sample was collected, a YSI-EXO2 multi-parameter water quality sonde (YSI Inc./Xylem Inc., Yellow Springs, OH) was deployed just below the surface of the water, ensuring all probes were completely submerged, to measure water temperature (˚C), atmospheric pressure (mmHg), dissolved oxygen , specific conductance (μS-cm), salinity (ppt), and turbidity (FNU). Samples were held at ambient temperature until they were transported to the laboratory for processing. All samples were processed within two hours of collection. Using a dissolved oxygen conversion calculator produced by the University of Minnesota Natural Resources Research Institute and approved by the U.S. EPA, dissolved oxygen saturation was converted to mg/L .
Tide height was recorded in meters using daily historical tide charts measured at the Duke Marine Lab, station id: 8656483, in Beaufort, NC, maintained by the National Oceanic and Atmospheric Administration (NOAA) . The base tide height was calculated by taking the average of all the high tide data provided by NOAA at the Duke Marine Lab from the first to the last sampling date. Any collection date with a tide height higher than the base represents a day with a larger-than-normal tidal influence. The previous 24 and 72 hours of precipitation in centimeters (cm) was determined by referencing reports gathered by the Community Collaborative Rain, Hail, and Snow Network at site NC-CR-139: Beaufort 0.5 W . Any sampling date with measurable precipitation during both 24 hours and 72 hours prior was deemed a wet weather event . Both precipitation measurements were used as several studies have shown that precipitation three days prior to sample collection has an impact on the concentration of microorganisms . Wind speed and direction were collected from NOAA’s meteorological observation dataset recorded at the Duke Marine Lab, station id: 8656483, in Beaufort, NC . Wind speed was reported in meters per second (m/s) and the wind direction was reported in degrees from true North. The wind direction was then converted to a cardinal direction by referencing the conversion chart provided by the University of Northern Iowa . The base wind speed and direction were calculated by averaging the available data for both parameters from the first sampling date to the last. Any sampling date with a wind speed higher than the base represents a day with a larger-than-normal wind influence. Environmental condition data is available in .
Colilert-18® and Enterolert ™ kits from IDEXX Laboratories (Westbrook, ME) were used to assess the most probable number (MPN) of total coliforms (TC), Escherichia coli (EC), and Enterococcus spp . (ENT) bacteria in each water sample following the manufacturer’s instructions. Briefly, a 1:10 dilution of each water sample was prepared by combining 90 mL of deionized water, 10 mL of raw sample, and respective test media into 120 mL bottles containing sodium thiosulfate (non-fluorescing polystyrene, IDEXX). The bottles were mixed by inversion for 30–60 seconds until fully homogenized and then placed on the bench top for a minimum of three minutes to ensure no bubbles were present. The solutions were then poured into individual IDEXX Quanti®-Tray 2000’s and sealed using a Quanti® - Tray Sealer PLUS (IDEXX). The trays containing Colilert-18® media were incubated for 18 hours at 35°C while the trays with Enterolert™ media were incubated for 24 hours at 41°C. After the appropriate incubation period, the trays were removed and analyzed. After quantifying the number of positive wells for each tray and assay, the MPN per 100 mL and 95% confidence interval were generated for TC, EC , and ENT using the IDEXX MPN calculator. A solution of 1X PBS was prepared as a method blank and analyzed with each batch of samples that were processed to ensure there was no contamination during processing. Each grab sample was tested in duplicate, and the MPN values that were generated from each Quanti®-Tray 2000 were averaged to generate a mean value for each sampling site for each sampling event.
Water samples collected in the field were filtered for DNA within two hours of collection. Duplicate 150 mL water samples were vacuum filtered to dryness using 47 mm diameter polycarbonate filters with a 0.4 μm pore size (HTTP, Millipore, Bedford, MA) on a six-place filtration manifold, and vacuum pump apparatus. Following filtration, filters were placed into DNase/RNase free microcentrifuge tubes using sterile forceps and stored at -80°C until downstream processing for a maximum of 3 months. DNA was extracted from one filter per sample following lysis with 1 mL easyMag® Lysis buffer (BioMérieux, Durham, NC) containing 7.4x10 5 copies of the extraction recovery control, gyrA , from a haloalkaliphilic archaeon. The lysed samples were incubated for 10 minutes at room temperature followed by total nucleic acid extraction using an automated magnetic particle analyzer (KingFisher™Flex, Thermo Fisher Scientific, Waltham, MA) and easyMag® NucliSENS® reagents (BioMérieux, Durham, NC), and eluted in 100 μL of Buffer AE (19077, QIAGEN, Germantown, MD). Specific details for the automatic extraction method can be found in Beattie et al . 2022 . A duplexed PCR mastermix targeting HF183 and the gyrA gene were created by adding 0.9 nM of the respective forward and reverse primers, 0.25 nM of the respective probes, 12.5 μL of ddPCR™ 2X Supermix for Probes (nodUTP, Bio-Rad Laboratories), 5 μL of DNA, and nuclease free water to a final reaction volume of 25 μL. Each sample was run in duplicate. Primers and probes were purchased from LCG Bioresearch (Petaluma, CA), and the sequences are shown in for the HF183 assay ; sequences for the haloalkaliphilic archaeon gyrA assay were kindly provided by John Griffith (Southern California Coastal Water Research Project, Costa Mesa, CA) and are in preparation for publication . In this study, gyr A was spiked into the extraction buffer and was used to assess both extraction recovery and inhibition. 
Positive HF183 controls (see ), no-extraction controls, and no template controls were included with each assay plate in addition to method blanks from each sampling date. No extraction controls consisted of a sterile 47 mm polycarbonate filter (0.4 μM pore size) extracted using the same method as sample filters. These samples were then analyzed using ddPCR using the same method as sample filters. No-template controls consisted of PCR master mix containing nuclease free water instead of sample DNA. Additional MIQE details can be found in . Twenty μL of the PCR mastermix and sample were pipetted into sample wells of the DG8™ Cartridge (Bio-Rad,) using a manual 8-channel pipette (L8-50XLS+, Rainin, Oakland, CA) followed by the addition of 70 μL of Droplet Generation Oil for Probes (Bio-Rad) to the oil wells. The cartridges were covered with DG8™ Gaskets (Bio-Rad) and processed in a manual Droplet Generator (Bio-Rad). The droplets were gently transferred to a semi-skirted 96-well PCR plate (mTEC, Eppendorf, Framingham, MA) using a manual 8-channel pipette. The PCR plate was sealed with pierceable foil (Bio-Rad) using a PX1™ PCR Plate Sealer (Bio-Rad). The PCR plate was placed in a C1000 Touch™ Thermal Cycler (Bio-Rad) and amplification was performed with the following temperature profile: 10 min at 95°C for initial denaturation, 40 cycles of 95°C for 30 s, and 58°C for 60 s with a ramp rate of 2°C/s, followed by 98°C for 10 min, then an indefinite hold at 4˚C (Zhu et al. 2020). After PCR cycling was complete, the plate was placed in a QX200™ instrument (Bio-Rad) and droplets were analyzed according to manufacturer’s instructions for 6-FAM™/HEX™. Data acquisition and analysis were performed with QuantaSoft™ v. 1.7 (Bio-Rad). 
The fluorescence amplitude threshold distinguishing positive from negative droplets was set manually by the analyst at the midpoint between the average baseline fluorescence amplitudes of the negative and positive droplet clusters . The same threshold was applied to all wells of a PCR plate. Results from single PCR wells were excluded if the total number of accepted droplets was <10,000 or if the average fluorescence amplitudes of positive or negative droplets clearly differed from those of the other wells on the plate, in accordance with manufacturer guidelines. The QuantaSoft software uses the Poisson distribution to quantify the concentration of targets based on the numbers of positive and accepted droplets in each well. Samples were quantified in duplicate, and replicate wells were merged. A sample was considered positive and quantifiable if the minimum threshold of three positive droplets was met.
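The Poisson correction underlying this quantification can be sketched as follows: for k positive droplets out of n accepted, the mean copies per droplet is λ = -ln(1 - k/n), and dividing by the droplet volume gives the reaction concentration. A hedged illustration; the ~0.85 nL droplet volume is the commonly cited nominal QX200 value, not stated in the text, and the instrument applies its own calibrated volume:

```python
import math

DROPLET_VOL_UL = 0.85e-3  # assumed nominal QX200 droplet volume (~0.85 nL)

def ddpcr_conc(positive: int, accepted: int) -> float:
    """Copies per microliter of reaction from droplet counts via the Poisson correction."""
    lam = -math.log(1.0 - positive / accepted)  # mean target copies per droplet
    return lam / DROPLET_VOL_UL

# A well with 120 positive droplets out of 15,000 accepted (invented counts):
conc = ddpcr_conc(120, 15000)
```

Under the quantification rule described above, wells with fewer than three positive droplets would be treated as non-detects rather than quantified.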
All analyses of sewer and stormwater infrastructure were performed using ArcGIS Pro 2.7.0 (Esri Inc., Redlands, California). Shapefiles of the sewer and stormwater infrastructure were provided by the Beaufort, NC, Town Engineer (2021). The attribute table of each shapefile included information on each pipe's diameter, length, material, and date of construction. The attribute tables also included the location and date of construction of nearby lift stations and stormwater outflows. The projected coordinate system used for all maps was NAD 1983 (US feet), and all shapefiles were overlaid on the Hybrid Reference Layer base map (Esri Inc., Redlands, California). All mapping figures were made using National Agriculture Imagery Program (NAIP) aerial imagery from the United States Geological Survey Earth Resources Observation and Science (USGS EROS) database.
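One of the infrastructure parameters used later in the ranking models, the inverse of the distance to the nearest stormwater outflow, can be computed directly from projected coordinates. A minimal sketch with invented coordinates; the study performed this step in ArcGIS Pro on the NAD 1983 US-feet projection:

```python
import math

def inverse_dist_to_nearest(site, outflows):
    """Inverse of the Euclidean distance (projected units) from a site to its nearest outflow."""
    nearest = min(math.dist(site, o) for o in outflows)
    return 1.0 / nearest

# Hypothetical projected coordinates (ft)
site = (1000.0, 2000.0)
outflows = [(1030.0, 2040.0), (1500.0, 2500.0)]
inv_d = inverse_dist_to_nearest(site, outflows)  # nearest outflow is 50 ft away
```

Using the inverse distance means that sites closer to an outflow receive larger parameter values, and hence higher ranks, in the models below.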
Two models were developed to rank potential sources of fecal contamination in TCE: an equal-weight model and a variable-weight model. Six parameters were included for each site: mean EC concentration, mean ENT concentration, mean HF183 concentration, percentage of pipes in a 400-m radius made of vitrified clay, percentage of pipes in a 400-m radius over 50 years of age, and the inverse of the distance to the nearest stormwater pipe in a 400-m radius. The parameters were ranked at each site, with 10 assigned to the highest concentration, percentage, or inverse distance and 1 to the lowest. In the equal-weight model, each parameter was given equal weight, so the overall site ranking was based on the sum of the parameter ranks at each site. See  for additional details. In the variable-weight model, empirical knowledge was used to weight the parameters from 1 to 6 (6 being the highest) according to how directly each parameter indicates the presence of human fecal contamination; measured markers of fecal contamination (FIB, EC, ENT, HF183) were assigned the highest weights. If two parameters were equally likely to indicate the presence of fecal contamination, they were given the same weight and subsequent weights were adjusted accordingly; the weights assigned to each parameter under the variable-weight model are given in . For both models, each parameter's weight was multiplied by its rank at each site to calculate a value.
The weighted values of the six parameters at each site were then summed, and the site with the highest total sum was considered the most likely source of fecal contamination, represented by a final rank of 1. Additional details can be found in .
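The rank-and-weight scheme described above can be sketched as follows. The site values and weights here are invented for illustration (the study's actual weights are in its supporting information), and ties are broken by input order rather than by averaged ranks:

```python
def rank_values(values):
    """Rank sites 1..n by parameter value; the highest value receives the highest rank (n)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def site_scores(params, weights):
    """params: {name: value per site}; weights: {name: weight}. Weighted rank sum per site."""
    n = len(next(iter(params.values())))
    scores = [0.0] * n
    for name, values in params.items():
        for i, r in enumerate(rank_values(values)):
            scores[i] += weights[name] * r
    return scores

# Three hypothetical sites; the equal-weight model would simply set every weight to 1
params = {"mean_ENT": [4504.3, 800.0, 12.3], "pct_VC_pipe": [30.8, 10.0, 5.0]}
weights = {"mean_ENT": 6, "pct_VC_pipe": 2}
scores = site_scores(params, weights)
# Final rank 1 = site with the highest weighted score (most likely contamination source)
final_order = sorted(range(len(scores)), key=lambda i: -scores[i])
```

With these invented inputs, the first site receives the highest weighted score and therefore a final rank of 1.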
Any quantified FIB values (MPN per 100 mL) below or above the detection limits, containing a "<" or ">" symbol as assigned by the IDEXX MPN calculator, were reassigned to the next respective numerical value for use in statistical calculations . For example, a value of "<10" was changed to 9 and a value of ">24196" was changed to 24197. Additionally, all non-detect HF183 concentrations were scored as 0 for statistical analyses. All FIB and microbial gene marker concentration data were log10-transformed to reduce skewness, as the Shapiro-Wilk test determined the data were not normally distributed (p < 0.05). All statistical tests were performed at a significance level of p = 0.05 and a confidence level of 95%. Differences between site concentrations of the log10-transformed FIB, molecular qMST targets, and environmental parameters were determined using the Kruskal-Wallis test for non-parametric data. If significant differences were identified, the Dunn test was used to determine which collection sites and dates differed significantly. The non-parametric Spearman's rank correlation test was used to evaluate correlations between FIB, microbial gene markers, environmental data, and sewer infrastructure; the strength of correlation is denoted by the Spearman's rank correlation coefficient, r_s. All statistical tests were conducted using R software (R Core Team, Vienna, Austria) in RStudio (RStudio Team, Boston, MA) using the tidyverse package , dplyr package , car package , corrplot package , and lme4 package . All figures were created using the ggplot2 package .
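The censored-value substitution and log10 transform described above fit in a few lines. This is an illustrative Python rendering of that preprocessing step, not the authors' R code:

```python
import math

def preprocess_mpn(raw):
    """Replace '<x' with x-1 and '>x' with x+1, then log10-transform (as described in the text)."""
    out = []
    for v in raw:
        if isinstance(v, str) and v.startswith("<"):
            v = float(v[1:]) - 1      # below detection limit: next value down
        elif isinstance(v, str) and v.startswith(">"):
            v = float(v[1:]) + 1      # above detection limit: next value up
        out.append(math.log10(float(v)))
    return out

transformed = preprocess_mpn(["<10", 100, ">24196"])  # log10 of [9, 100, 24197]
```

The transformed values would then feed into the non-parametric tests (Kruskal-Wallis, Dunn, Spearman) run in R.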
Water quality

Water quality parameters, such as salinity, dissolved oxygen, and temperature, varied throughout the TCE and were positively associated with precipitation patterns (n = 80,  and ). Channel salinity (ppt) ranged from 0.2 in creek headwaters (Site 1) to 36.4 in the open channel (Site 10). Salinity differed significantly by site (p < 0.05) under all conditions. Dissolved oxygen levels were lower at sites within creek headwaters (Site 1, mean saturation = 43.3%, equivalent to 3.6 mg/L) than in the open channel (Site 10, mean saturation = 82.2%, equivalent to 6.6 mg/L) over the 10-week study period. Water temperature was more stable across sites, with limited variability over the 10-week study (min. 24.5°C recorded at Site 1 and max. 27.6°C recorded at Site 5). Enterococcus spp. concentrations ranged from 9 to 24,197 MPN per 100 mL (95% CI [0, ] and [16304, 47161], respectively), and EC concentrations ranged from 77.5 to 24,197 MPN per 100 mL (95% CI [32.5, 146.0] and [NA, Infinite], respectively) across all sites over the course of the study ( and  Tables). The highest measured concentrations of FIB were observed in creek headwaters and at locations adjacent to stormwater drains, while lower concentrations were detected in the open, deeper, more tidally influenced reaches of the estuary. For example, Site 1 in the creek headwaters had high FIB concentrations, with ENT between 260.5 and 24,196.5 MPN per 100 mL (95% CI [162.5, 392.5] and [16304, 47161], respectively) and EC between 381.5 and 6,330.5 MPN per 100 mL (95% CI [257, 547] and [4142, 9108], respectively). In contrast, Site 10 in the open channel had ENT concentrations ranging from 9 to 20.5 MPN per 100 mL (95% CI [0, 37] and [4, 72], respectively) and EC concentrations ranging from 86.5 to 459 MPN per 100 mL (95% CI [41.5, 155.5] and [323, 630.5], respectively; – Tables).
Fecal indicator bacteria concentrations trended higher across the transect after wet weather events (n = 40) compared to dry weather (n = 40). Precipitation was significantly and positively correlated with measured concentrations of ENT (previous 24 hours: r_s = 0.50, p < 0.001; previous 72 hours: r_s = 0.52, p < 0.001), with mean ENT concentrations of 1,053.1 MPN per 100 mL (sd = 3,877.4 MPN per 100 mL) following wet weather events compared to 104.4 MPN per 100 mL (sd = 299.4 MPN per 100 mL) during dry weather. The mean concentration of ENT during wet weather events exceeded the U.S. EPA and North Carolina Department of Environmental Quality (NCDEQ) recreational water quality standard of 104 MPN per 100 mL . Enterococcus concentrations were significantly and positively correlated with wet weather events (p < 0.01). Additionally, ENT concentrations were significantly and negatively correlated with salinity (r_s = -0.48, p < 0.001), dissolved oxygen (r_s = -0.74, p < 0.001), and water temperature (r_s = -0.68, p < 0.001) ( and ). In contrast to ENT, EC exceeded the U.S. EPA recreational water quality standard of 320 MPN per 100 mL during both wet and dry conditions, with a wet weather mean of 2,644.7 MPN per 100 mL (sd = 6,364.3 MPN per 100 mL) and a dry weather mean of 795.6 MPN per 100 mL (sd = 1,404.6 MPN per 100 mL) across sites. Of the samples collected during wet weather events (n = 40), 23 (57.5%) exceeded the U.S. EPA standard for EC and 22 (55%) exceeded the standard for ENT, compared to dry weather, when 18 (45%) exceeded the EC standard and 5 (12.5%) exceeded the ENT standard. E. coli concentrations also had significant negative correlations with salinity (r_s = -0.25, p < 0.05), dissolved oxygen (r_s = -0.47, p < 0.001), and water temperature (r_s = -0.32, p < 0.05), although these correlations were not as strong as those for ENT ( and ).
Additionally, there was no significant relationship between wet/dry weather events and EC concentrations. No FIB species were detected in any of the method blanks.

Quantitative microbial source tracking

To identify whether the source of fecal contamination observed in the TCE was of human origin, the human host-associated marker HF183 was measured in all samples and eight field blanks (one per sampling event). HF183 was detected in six of 80 samples (7.5% detection rate), with four of those detections following wet weather events; HF183 was not detected in any field blank . Concentrations ranged from a mean of 33.5 copies per 100 mL during dry weather (sd = 7.4 copies per 100 mL) to 108.3 copies per 100 mL during wet weather (sd = 36.9 copies per 100 mL); however, overall detection in the estuary was low. Each assay plate included three no-extraction controls, three no-template controls, and three positive controls in addition to the method blank samples from each collection event. HF183 was detected in each positive control and was not detected in any negative control or method blank, as expected. Fluorescence plots did not indicate assay inhibition in study samples, given the lack of partial amplification ("rain") and the tight clustering of the positive fluorescent signal observed in our fluorescence plots .

Sewer and stormwater infrastructure

Stormwater and sewer pipe age, material type, and location were assessed using GIS data provided by the Town of Beaufort, NC. The GIS infrastructure layers were updated in fall 2021, providing the most up-to-date information for this study. Beaufort, NC, has approximately 53,000 meters of underground sewer pipe. Along the perimeter of the TCE (approximately 400 meters), most sewer pipes were constructed in 1969 (52.3%) and 2008 (32%). Pipe material throughout the Town of Beaufort includes CIPP, DIP, PVC, truss, and VC. The VC pipes were all constructed in 1969.
Around the perimeter of the TCE, most pipes are constructed of VC (30.8%) and CIPP (25%). Additionally, there is one lift station adjacent to Site 8 and another adjacent to Site 10 . In addition to sewer pipe, approximately 55,000 meters of underground stormwater pipe span the town, including the perimeter of the TCE. Stormwater infrastructure data indicated that three of 71 total discharge points are located within the TCE . Distances between sites and stormwater outflows in the TCE ranged from five to 400 meters.

Integrating FIB, qMST, and GIS-based infrastructure data

A major goal of this project was to integrate stormwater and sewer infrastructure information with water quality data to develop a ranking system identifying potential areas in need of assessment, remediation, or structural testing . TCE is an ideal case study location for this analysis, as the estuary is geographically small, highly valuable for recreational activities, well-studied , and the town has detailed information about the infrastructure surrounding the site. To rank the sample sites as potential sources of fecal contamination, several factors were considered. First, the mean concentrations of EC, ENT, and HF183 at each site were used as the biological parameters. Next, the infrastructure data within a 400-meter radius of each site were calculated, including the percentage of sewer pipes aged over 50 years, the percentage of sewer pipes made of VC, and the inverse of the distance to the nearest stormwater outflow. Two different models were used to assess potential fecal contamination sources using the six parameters: 1) an equal weighting of each parameter, and 2) a variable weighting with higher weight given to those parameters that explicitly link fecal contamination to the site (EC, ENT, HF183). Details of the two models can be found in the methods and supporting information, . Enterococcus spp.
and EC concentrations were significantly different between sites (p < 0.001), with Sites 1 and 2 having higher concentrations compared to other sites across the estuarine transect. Mean ENT concentrations varied from a low of 12.3 (Site 10) to a high of 4,504.3 MPN per 100 mL (Site 1, ). Of the 10 sites, four had mean ENT concentrations above recreational water quality standards, and three of those sites were located in TCE headwaters. Mean EC concentrations varied from a low of 200.4 (Site 9) to a high of 6,429.6 (Site 6) MPN per 100 mL . Even though the State of NC does not use EC for recreational water quality management, we measured EC in this study because of its prominent use by other coastal states for recreational water quality management. Of the 10 sites, six had mean EC concentrations above the U.S. EPA standard of 320 MPN per 100 mL . Four of those sites (Sites 1, 2, 6, 7) had mean EC concentrations over 2,000 MPN per 100 mL. Although detection of the qMST marker HF183 was limited, the data were included in the site ranking due to its direct link to human fecal contamination and subsequent linkage to human health (e.g., Boehm et al. 2015 ). The age of sewer pipes, pipe material, and proximity to stormwater discharge outflows were also assessed and incorporated into our modeling effort. Within a 400-meter radius of each site, between six and 76 pipes were observed. The percentage of pipes aged 50 years or older within a 400-meter radius of a sampling site was significantly and positively correlated with FIB concentrations (ENT: r_s = 0.48, p < 0.001; EC: r_s = 0.52, p < 0.001) ( and ). Sites within TCE headwaters (Sites 1, 2, and 3) had the highest percentage of sewer pipes aged 50 years or older . Additionally, pipes aged 50 years or older were found to be made of VC, which is known to be the least durable of the piping materials surrounding the estuary .
Multiple sampled sites are adjacent to stormwater outflows, but Sites 6 and 7 are the closest, at five and 15 meters from the nearest outflow, respectively . Site 6 is located within a stormwater ditch finger of the estuary, and Site 7 is downstream of Site 6. There are also two stormwater outflows between Sites 1 and 2, approximately 60–80 meters from each site, which may also influence the level of fecal contamination detected . Sites with the highest mean EC, ENT, and HF183 concentrations frequently contained the largest percentage of sewer pipes aged over 50 years and made of VC and/or were in close proximity to stormwater outflows ( and ). The three highest-ranked sites (Sites 1, 2, and 6) were the same across both ranking models and had elevated FIB concentrations. The remaining rankings varied, with sites containing elevated fecal contamination markers (such as Site 4) ranked higher in the variable-weight model than in the equal-weight model. In both ranking models, three of the top five sites ranked as potential sources of fecal contamination are within the upper estuary limits of TCE, suggesting a potential persistent source of contamination in this area.
Although estuarine water quality is routinely assessed using FIB and qMST approaches , municipalities struggle to use these data to prioritize infrastructure for monitoring and repair. Infrastructure remediation is a central priority of the Bipartisan Infrastructure Law , and collaborative conversations with city leaders in Beaufort, NC, emphasized the need to use a combination of water quality and infrastructure information to identify locations for improvement. Here, we combined water quality data with GIS infrastructure data in a ranking system to prioritize sources of fecal contamination and locations for infrastructure remediation, using TCE in Beaufort, NC, as a case study. We found that higher concentrations of FIB occurred in areas containing aging sewer pipe, pipe materials prone to cracking, and stormwater outflows , suggesting local infrastructure may contribute to fecal contamination in the estuary. Additionally, local conditions, including weather events, contributed to levels of fecal contamination in the estuary. Samples were collected over a range of wet and dry weather events. Enterococcus spp. concentrations differed significantly between wet and dry weather, whereas EC remained high in the estuary under both conditions. The human fecal marker HF183 was detected infrequently but trended higher during wet weather conditions. However, DNA extraction recoveries were highly variable and total detections of this marker were low over the course of this study. Previous studies in TCE have detected similar levels of FIB and HF183 through large-scale sampling , and found significant increases in FIB concentrations following precipitation events . Wet weather can increase the diffusion rate of fecal matter from soils into surrounding water bodies by infiltrating into and overwhelming compromised underground sewer infrastructure .
In fact, in a previous study examining inflow and infiltration across 19 wastewater treatment plants in coastal areas of eastern NC, the Town of Beaufort was highlighted as one of the systems most strongly impacted by rainfall and sea level rise . Additionally, wet weather can increase the volume of stormwater runoff, which can introduce fecal contaminants from urban areas into surrounding water bodies, especially in coastal communities where increased development replaces permeable surfaces . Stormwater pipes are often found adjacent to sewer pipes, and sewage can infiltrate into stormwater pipes through cracks and damaged areas . In this study, mean EC was significantly higher at sites adjacent to and downstream from stormwater outflows (Sites 6 and 7) and adjacent sewer infrastructure in the upper estuary (Sites 1 and 2) compared to other sampled locations . Although precipitation events correlated with higher FIB concentrations, persistent quantification of EC and ENT was also identified during dry weather, indicating the potential for chronic sources of fecal contamination entering the TCE. Sites with high levels of fecal contamination during dry weather were located within TCE headwaters, where tidal influences are minimal (Sites 1, 2, and 3). This conflicts with a previous study in the area, in which dry weather FIB concentrations of this magnitude were not observed . However, that study was completed more than a decade before the study presented here, and with the unprecedented rate of coastal development in the Town of Beaufort, the drivers of water quality impairment have changed. Other water quality monitoring studies have detected high levels of dry weather fecal contamination .
These studies suggest that the source of dry weather fecal contamination is most likely failing sewage infrastructure, as sanitary sewer pipes are under constant stress from human use, which allows sewage to exfiltrate out of cracks in the pipes under all weather conditions . These studies support our hypothesis that damaged or failing sewer infrastructure may contribute to the observed fecal contamination in the TCE. Fecal indicator bacteria and HF183 were negatively correlated with salinity and dissolved oxygen concentrations in the estuary, and similar correlations have been found in other studies . Korajkic and others suggest the negative relationship between salinity and FIB and qMST markers is due to induced osmotic shock, which affects the expression of genes associated with membrane composition and therefore hinders survivability. Moreover, other water quality monitoring studies have also found a negative relationship between FIB species and dissolved oxygen . Low dissolved oxygen can be caused by elevated levels of aerobic bacteria, like FIB, as they consume oxygen for metabolic processes; thus, lower dissolved oxygen correlates with increased bacterial levels. No significant correlations were found between FIB and environmental conditions, including tide height, wind speed, and wind direction. This conflicts with a 2018 study by Kiaghadi and Rifai ; however, this difference may be due to the small size of the sampling area and the generally protected nature of the estuary in this study. The upper TCE, where tidal influences are lower and residence time is longer, contained higher concentrations of EC and ENT than the downstream sites, which have stronger tidal influences and shorter residence times. With the prevalence of King Tides increasing across eastern NC coastal systems, the influence of tidal inundation is already becoming more prevalent .
Further investigation is needed to fully ascertain the impact of these environmental conditions on fecal contamination levels across the TCE. Significant and positive correlations were found between FIB concentrations and sewer pipe age and material. Sites with the highest rankings (Sites 1, 2, and 6) all have a high proportion of sewer pipes aged 50 years or older and are made of VC. Vitrified clay pipe material has been shown to be brittle and prone to failing . The VC piping surrounding Sites 1, 2, and 6 was constructed in 1969 making these pipes approximately 52 years of age at the time of this study. In coastal areas where saltwater intrusion and storm events are frequent, VC pipes have a high potential of being compromised. Cracks and eroded areas allow sewage to exfiltrate into the estuary which could explain the high levels of fecal contamination observed in creek headwaters in TCE. The ranking system defined in this study can be used in other systems to prioritize areas for infrastructure review and remediation. Here, we used two different weighting approaches to identify sites serving as potential sources of contamination. Although the top two “at-risk” sites were the same regardless of method, we suggest implementing the more rigorous variable weighting approach as this method uses empirical evidence of the included parameters which is more likely to capture sites serving as potential sources of contamination. This method places more emphasis on those sites with a measurable amount of contamination in the water while still allowing for other factors such as septic and sewage infrastructure around a site to help inform the final risk assessment. This case study supports implementing a combined approach for infrastructure prioritization near priority waters. However, the present study raises additional questions which should be addressed in future research. 
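As a concrete illustration of the two weighting approaches, the sketch below ranks sites on each parameter and combines the ranks into a weighted composite score. The site names, parameter values, and weights are invented for illustration only and are not the values used in this study.

```python
# Hypothetical sketch of the site-ranking approach: each site is ranked on
# every parameter (rank 1 = highest measured value = highest risk), and the
# per-parameter ranks are combined with either equal or variable weights.
# All site values and weights below are illustrative, not study data.

def rank_sites(values):
    """Rank sites for one parameter; rank 1 goes to the highest value."""
    ordered = sorted(values, key=values.get, reverse=True)
    return {site: rank for rank, site in enumerate(ordered, start=1)}

def composite_scores(per_param_values, weights):
    """Weighted sum of per-parameter ranks; a lower score = higher priority."""
    scores = {}
    for param, values in per_param_values.items():
        for site, r in rank_sites(values).items():
            scores[site] = scores.get(site, 0.0) + weights[param] * r
    return scores

data = {
    "mean_EC_mpn_per_100ml": {"Site1": 900, "Site2": 650, "Site6": 700, "Site7": 120},
    "pct_pipes_over_50yr":   {"Site1": 80,  "Site2": 75,  "Site6": 60,  "Site7": 10},
}
equal    = {"mean_EC_mpn_per_100ml": 0.5, "pct_pipes_over_50yr": 0.5}
variable = {"mean_EC_mpn_per_100ml": 0.7, "pct_pipes_over_50yr": 0.3}  # emphasize measured contamination

scores = composite_scores(data, variable)
priority = sorted(scores, key=scores.get)
print(priority)  # → ['Site1', 'Site6', 'Site2', 'Site7'] (highest to lowest priority)
```

Because the variable scheme weights the measured water-column contamination more heavily, a site with high bacterial counts rises in priority even when its surrounding infrastructure score is middling, which mirrors the rationale given above for preferring this scheme.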
More frequent sampling that corresponds to local weather conditions, including wet and dry weather events, will allow for the incorporation of climate in the ranking model. Sampling across tide heights is also recommended to assess the role of dilution in measured concentrations of indicator species. Future studies could also consider using additional host-associated markers to identify the source of fecal contamination, as a study in Southern California noted the value of using a variety of sewage indicators beyond HF183 to differentiate between sources when applying a ranking system to stormwater outflows. Additional infrastructure information would be beneficial, including flow, pipe diameter, and groundwater height, as these factors can also contribute to fecal contamination; however, this information was not available in the Town of Beaufort records. Lastly, more sophisticated ranking models may be developed based on the case study presented here, and may include dye releases and/or tracking of other sewage-related chemical markers, such as caffeine and sucralose, to apply a more robust weighting method for parameters. Ranking potential sources of fecal contamination may be essential to helping small coastal communities like the Town of Beaufort, NC, better allocate resources in a changing climate. The results of this study indicate that FIB concentrations were strongly associated with local infrastructure. By ranking the study sites based on surrounding infrastructure and average bacterial concentrations, several sites were determined to be most at risk for contributing fecal contamination to the estuary and should be prioritized for review and remediation. This method is broadly applicable to estuarine ecosystems and may help improve water quality through infrastructure repair.
The simultaneous pressures of unprecedented rates of coastal development and aging, inundated sewage and stormwater infrastructure are forcing coastal towns across the coastal plains of the southeastern USA to seek tools to prioritize infrastructure remediation. Given the increasing pressures on coastal waters, we sought to integrate routinely available microbial water quality monitoring, environmental parameters, and GIS-based infrastructure data into a framework that municipalities can use to prioritize sewage and stormwater infrastructure for potential remediation and to prevent contaminants from entering high priority recreational waters. Our study suggests that a simple ranking system can be used to integrate often readily available information, which can then be applied to identify the magnitude and location of potential sources of fecal contamination in estuarine ecosystems, allowing municipalities to take action. In future applications of this ranking system, a long-term study conducted over 1–2 years may further elucidate the contribution of contamination from seasonal, environmental, and infrastructure sources.
S1 Fig. Representative ddPCR fluorescence plots of A) HF183 positive control and B) gyrA positive control. (TIF)
S1 Table. Water temperature, dissolved oxygen (DO), salinity, and turbidity values for all sites over the course of the project. NAs indicate sample data was unavailable for the site and date. (DOCX)
S2 Table. Concentration and lower/upper confidence intervals of total coliforms for each site and the method blank on each sampling date (MPN per 100 mL). (DOCX)
S3 Table. Concentration and lower/upper confidence intervals of Escherichia coli (E. coli) for each site and the method blank on each sampling date (MPN per 100 mL). (DOCX)
S4 Table. Concentration and lower/upper confidence intervals of Enterococcus for each site and the method blank on each sampling date (MPN per 100 mL). NAs indicate sample data was unavailable for the site and date. (DOCX)
S5 Table. a. Spearman rank correlation test statistics (rs) between fecal indicator bacteria (FIB) species and environmental conditions and parameters. b. Spearman rank correlation test p-values between fecal indicator bacteria (FIB) species and environmental conditions and parameters. (DOCX)
S6 Table. Concentrations of HF183 (copies per 100 mL) and extraction recovery percentage for all sites and the method blank over the course of the project. ND indicates sample concentration below the limit of detection for HF183. (DOCX)
S7 Table. a. Spearman rank correlation test statistics (rs) between fecal indicator bacteria (FIB) species and piping materials and construction dates. b. Spearman rank correlation test p-values between fecal indicator bacteria (FIB) species and piping materials and construction dates. (DOCX)
S8 Table. a. Rank results for each measured parameter using the equal weighting method. b. Rank results for each measured parameter using the variable weighting method. (DOCX)
S9 Table. Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines checklist. (XLSX)
S1 Method. Design and sequence of the microbial source tracking control. (DOCX)
S2 Method. Two parameter weighting methods for ranking potential sources of fecal contamination in Town Creek Estuary, Beaufort, North Carolina. (DOCX)
S1 Equation. Equation to calculate gene copies/L from ddPCR. (DOCX)
The single most important lesson from COVID-19 – It is time to take public health seriously

We explored the number of hours medical students spend learning public health in medical curricula. We could not find any meaningful literature discussing ‘public health’ in this context as such. We then examined medical school curricula for topics that underpin public health, such as preventive medicine, environmental health, lifestyle health, and health policy and systems. Several studies reported that many medical schools offer limited training in occupational and environmental health. Lack of awareness about environmental and occupational risks compromises physicians’ ability to effectively manage their patients and adequately protect themselves from the occupational risks they face. The impact of COVID-19 on the health care workforce is well documented. For instance, the mortality among physicians, particularly among Black, Asian, and Minority physicians, is of serious concern. Unaddressed metabolic risk factors, inadequate training in infection prevention, and limited access to personal protective equipment (PPE) were cited as reasons for this higher mortality. Patel et al. (2014) found that 40% of students in the United States of America (US) reported inadequate instruction in health policy, and that only minimal improvement had been observed over the years. Physicians need leadership and advocacy skills to support the health care needs of the populations they serve. Medical education programs are inadequate in providing such skills, which are critical to advancing and promoting public health. A United Kingdom study established that while students were interested in receiving leadership and management education, the existing curriculum was deficient in this area. These inadequacies have had repercussions during the COVID-19 pandemic.
Physicians faced difficulty advocating for their own PPE access, while others in managerial positions were oblivious to their peers’ needs, demonstrating a lack of leadership and empathy, key skills imparted by public health education. Satisfactory medical education would require a robust public health curriculum that is practical and field-oriented, well-qualified teachers and facilitators, and collaborators who provide students with real-world experience. Currently, such a model is elusive, with heterogeneity and fragmentation presenting a challenge to teaching and evaluating public health education in medical schools. Globally, improvements in national health budget allocation and investment in public health have translated into an increase in life expectancy and a reduction in maternal and infant mortality, albeit through a slow, minimalistic approach. In 2016, public health (preventive care) accounted for just 12% of global health expenditure, while in 2017 the Organization for Economic Cooperation and Development (OECD) countries invested only 2.8% of their total health expenditure in public health. In 2018, only 3% of all US health care spending was allocated to public health. The table quantifies preventive care (as a surrogate of public health) investment as a percentage of GDP and total health expenditure made by OECD countries. We ranked countries (high to low) according to the share of their preventive care investment within the overall health portfolio for 2018 and then included the top five countries (data available for OECD, 2010–2018). Among the OECD countries, Canada had the highest investment, spending 6% of its total health spending in 2018 on public health interventions/programs. The non-OECD countries seemingly spend a larger proportion of their total health spending on preventive care, but this is often in the context of a smaller GDP and, consequently, a smaller overall allocated health budget.
The trends demonstrate plateauing or stagnating investment in preventive care, in spite of the staggering returns of public health interventions in high income nations, currently at 14.3:1, and likely similar in low and middle income nations. Enhanced governmental spending in the public health sector can stimulate long-term economic growth and improve health, which will consequently result in lower overall health care costs. At an individual level, the links between health investment on the one hand and productivity and income on the other are indubitable. The National Institutes of Health (2012-2017) research portfolio analysis found that only 16.7% of projects and 22.6% of the total research allocation were for primary and secondary prevention research in the US, with less than 5% of the projects choosing an outcome related to the leading risk factors for death and disability in the country. The funded projects were mostly observational and secondary research, with only a few being intervention-focused. Inadequate investment in public health research presents a challenge in high-income countries and has serious consequences in low and low-middle income countries, where more targeted, high-impact investments are needed given the resource limitations. Public health research should be part of a systematic process in which available resources are directed to finding solutions to major health challenges confronting a country. In general, countries lack a transparent prioritization process that ensures the available meagre funding is invested in high impact areas. The complexity of issues surrounding public health propels researchers to focus their efforts on easy-to-conduct research – for example, cross-sectional studies – with limited utility.
In our opinion, such studies, particularly those with repetitive or inconclusive findings, are costly and may take the focus away from more significant and useful studies in areas such as implementation research, which may strengthen public health preparedness. The never-ending debate on the lack of high-quality evidence for facemask use in community settings during COVID-19 is a case in point. Much of the intervention work undertaken by public health practitioners is not evaluated in a scientific manner for want of epidemiological and biostatistical skills; even when evaluated, the research findings are not disseminated and reported in peer reviewed journals, as practitioners may not have the time to write manuscripts. These challenges further emphasize the need for medical curricula reform and for a stronger commitment by governments and national institutions to develop and support the field of public health. There has been a significant increase in life expectancy worldwide. New diagnostic, preventive, and treatment approaches have reduced death rates. However, a high prevalence of chronic disease, emerging and reemerging patterns of infectious disease, and social factors such as health inequity present serious challenges. Additionally, global climate change, natural disasters, rapid urbanization, deforestation and the subsequent closer contact with animals, migration, and conflicts are increasing the threat posed by pathogens. A continued focus only on the biomedical, clinical approach to treating disease is a disservice to humanity, especially in light of the staggering, and ever increasing, economic cost of such an approach. Advocating for the role of public health in policy formation and community education, strengthening the public health curriculum in medical education, and increasing investments in evidence-based public health interventions are actions which are necessary and required.
Promoting public health research, particularly in low- and low-middle-income countries, to find custom-tailored solutions to local health problems will be invaluable. With COVID-19, the world has paid a tremendous price, both in terms of human suffering and social and economic disruption. Repetition of such an event is, unfortunately, far from impossible, as the world has already witnessed glimpses of such pandemics in the form of severe acute respiratory syndrome (SARS-CoV) and the Middle East respiratory syndrome (MERS-CoV). We can be sure that pandemics will occur again. We cannot face them equally unprepared and equally incapable of a swift and unified global response. The priority placed on public health has been low for far too long. This complacency must end now. It is time to take public health seriously.
Antigen test swabs are comparable to nasopharyngeal swabs for sequencing of SARS-CoV-2

Coronavirus disease 2019 (COVID-19) has highlighted the critical public health role of continual testing and viral genomic surveillance for tracking emerging variants, understanding transmission, linking viral evolution to changes in disease epidemiology, designing and evaluating diagnostic tools, and forecasting vaccine efficacy in the context of viral diversity. COVID-19, caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has led to approximately 760 million confirmed cases and 6.9 million deaths reported worldwide by the World Health Organization (WHO). SARS-CoV-2 genome evolution throughout the pandemic has led to the continual emergence of new variants with increased transmissibility, disease severity, and capacity for immune escape. Since the first SARS-CoV-2 genome sequences were published in January 2020, over 15 million sequences have been shared via the Global Initiative on Sharing All Influenza Data (GISAID) database, and over 7 million nucleotide sequences have been deposited in the National Center for Biotechnology Information (NCBI) ( https://www.ncbi.nlm.nih.gov/sars-cov-2/ ) through 29 May 2023. The unprecedented effort to monitor SARS-CoV-2 viral evolution has permanently changed the approach to pathogen genomic surveillance worldwide. SARS-CoV-2 genome sequencing approaches have most widely been applied to positive diagnostic samples from nucleic acid amplification tests (NAATs). The gold standard and most widely used NAAT is Real-Time Reverse Transcription Polymerase Chain Reaction (RT-PCR). As both viral sequencing approaches and RT-PCR directly amplify viral genomic material, the collection methods, reagents, and downstream protocols overlap, making RT-PCR-positive samples a useful source for genomic surveillance.
This was an effective strategy early in the pandemic, when RT-PCR was the most widely used test. The testing landscape, however, has shifted considerably over the course of the pandemic towards rapid diagnostic tests (RDTs), most commonly antigen-based lateral flow tests (LFTs). There are now more than 400 commercially available SARS-CoV-2 RDTs worldwide, and several antigen-based LFTs are authorized for over-the-counter home testing through emergency use authorization (EUA) in the US. Antigen-based assays detect specific viral proteins or the virus directly, without PCR amplification steps. The versatility of LFTs for broad application in schools, clinics, and home settings has significantly increased their use. Further, in an effort to increase COVID-19 detection, the US has made LFTs freely available through mail order, subsequently distributing over 270 million test kits as of March 2022. The sensitivity of antigen-based LFTs is comparatively lower than that of NAATs, especially in cases of low viral load or asymptomatic infection; however, when used within 5–7 days of onset among symptomatic individuals, the test can achieve 99.2% sensitivity and 100.0% specificity. When compared to NAATs, LFTs perform well with viral loads corresponding to an RT-qPCR Ct value ≤ 33 cycles. As robust viral genomic surveillance hinges on acquiring positive cases through testing, changes to testing practice will impact surveillance efforts if laboratory workflows are not robust to sample type. The ability to use previously collected swabs from positive LFTs for genomic analysis would be of particular benefit. As testing practices in the US and abroad continue to shift, a greater proportion of testing will be conducted via LFTs.
The ability to sequence from LFTs will allow researchers to obtain representative viral samples spanning the geographic and epidemiological scope of the pandemic, as viral genomic surveillance efforts continue throughout the subsequent phases of the response. Further, more LFT testing is likely to occur outside of the healthcare setting. This change could significantly reduce available samples for viral genomic surveillance and skew the available samples to only those tests performed in a clinical setting, which would result in a bias toward more severe cases. Capturing samples from LFTs would expand the representation of genomic surveillance. To this end, we compared the ability to use swabs collected from LFTs for viral genome sequencing to nasopharyngeal (NP) swabs used for NAATs. Primarily, we sought to determine whether the extraction reagent or other component of sample processing used for BinaxNOW COVID-19 Ag Card testing disrupted the ability to perform SARS-CoV-2 genome sequencing.
Of the 690 samples, 611 had detectable virus based on RT-qPCR Ct values after RNA extraction. A significantly greater proportion of LFT samples than NAAT samples yielded detectable virus (90.5%, N = 502, LFT vs 80.7%, N = 109, NAAT; p < 0.00001) (Fig. A). Among the samples that had detectable virus, we compared the RT-qPCR Ct values and found no significant difference between NAAT and LFT samples (median of 21.7 for NAAT and 21.9 for LFT, p = 0.27) (Fig. B). Using a cut-off Ct value of ≤ 30, 519 samples (78 NAAT and 390 LFT) were moved forward to viral genome sequencing. Subsequently, we found no significant difference between NAAT and LFT samples in the proportion that failed sequencing (8.1% NAAT vs 10.3% LFT, p = 0.48) and only a modest, though significant, difference in median sequencing coverage (median of 183 for NAAT vs 199 for LFT, p = 0.0018) (Supplementary Fig. A). The lineage assignments for sequenced samples are shown in Fig. . Most samples (96.2%) were assigned to the SARS-CoV-2 Omicron variant (BA.1 and BA.1.1). Comparing the time from sample collection to RNA extraction showed that the time to extraction was significantly shorter for LFT samples due to the logistics of our sample acquisition process (median of 20 days for NAAT vs 6 days for LFT, p < 0.00001) (Supplementary Fig. B). To assess the impact of time to extraction on outcomes, we compared results for each sample type independently. We did not find any significant difference in time to extraction between passing and failing outcomes for NAAT swabs (median of 19 days for pass vs 23 for failed, p = 0.15) (Supplementary Fig. C) or LFT swabs (median of 6 days for pass vs 7 for failed, p = 0.16) (Supplementary Fig. D).
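The proportion comparison above can be reproduced in outline from the reported counts. The sketch below computes an uncorrected Pearson chi-squared statistic in plain Python; the resulting p-value will differ somewhat from the reported one depending on the test variant and continuity correction used, so it illustrates the mechanics rather than reproducing the published figure exactly.

```python
import math

# 2x2 contingency table built from the counts reported in the text:
# rows = swab type, columns = (amplified, failed to amplify).
table = [[109, 135 - 109],   # NAAT: 109 of 135 samples amplified
         [502, 555 - 502]]   # LFT:  502 of 555 samples amplified

row_totals = [sum(r) for r in table]
col_totals = [sum(c) for c in zip(*table)]
grand_total = sum(row_totals)

# Pearson chi-squared statistic, no continuity correction:
# sum over cells of (observed - expected)^2 / expected.
chi2 = sum(
    (table[i][j] - row_totals[i] * col_totals[j] / grand_total) ** 2
    / (row_totals[i] * col_totals[j] / grand_total)
    for i in range(2) for j in range(2)
)

# For one degree of freedom, the chi-squared survival function is erfc(sqrt(x/2)).
p_value = math.erfc(math.sqrt(chi2 / 2))
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
```

The same table-and-expected-counts pattern extends directly to the pass/fail sequencing comparison reported later in this paragraph.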
Although we found no association between time to extraction and sequencing outcomes, we still aimed to rule out the effect of the difference in time to extraction between NAAT and LFT swabs; we thus performed a sub-analysis by down-sampling based on time to extraction. We limited the subsample to samples with a time to extraction of 14 to 21 days. This resulted in 41 NAAT swabs and 44 LFT swabs, of which 35 NAAT and 38 LFT had detectable Ct values, with no significant difference in failure by swab type (p = 0.6). After subsetting the data, the median time to extraction for both NAAT and LFT swabs was 16 days, with no significant difference (p = 0.352) (Supplementary Fig. A). Further, there was no significant difference in time to extraction for samples that did not amplify for NAAT or LFT swabs (Supplementary Fig. B). For the subsample, RT-qPCR Ct values were again not significantly different between NAAT and LFT swabs (median of 19.9 for NAAT vs 20.9 for LFT, p = 0.404) (Supplementary Fig. C). Using a cutoff Ct value of ≤ 30, 35 (85%) NAAT and 38 (86%) LFT samples moved forward to sequencing. There was also no significant difference in the proportion of samples from the subset that failed sequencing, nor any difference in sequencing coverage among those successfully sequenced (Supplementary Fig. D). The statistical significance of the sub-analysis was unchanged when bootstrapping the sub-groups to generate groups of 100 for each comparison, leading us to believe that the smaller sample sizes were not obfuscating significant changes. To further examine the effect of time to extraction on coverage and RT-qPCR values, we performed a Spearman's rank-order correlation analysis. We did not identify any significant correlation between time to extraction and RT-qPCR Ct values in the total data (R = − 0.02, p = 0.65) or the subset data (R = 0.09, p = 0.52).
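The bootstrap robustness check can be sketched as follows. The Ct values are synthetic (drawn only to resemble the reported subset medians), and the p-value uses the large-sample normal approximation to the Mann–Whitney U statistic without a tie correction, so this is an approximation of the check described above, not a reproduction of the study's exact test.

```python
import math
import random

def mann_whitney_p(x, y):
    """Two-sided Mann-Whitney U p-value via the normal approximation
    (no tie correction; adequate for a quick robustness sketch)."""
    n1, n2 = len(x), len(y)
    # U1 counts, over all pairs, how often x beats y (ties count half).
    u1 = sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail

rng = random.Random(42)
naat = [rng.gauss(19.9, 3) for _ in range(41)]   # synthetic subset Ct values
lft = [rng.gauss(20.9, 3) for _ in range(44)]

p_subset = mann_whitney_p(naat, lft)

# Bootstrap each group up to n = 100 (sampling with replacement) and re-test,
# checking that the conclusion is not an artifact of the small subset sizes.
boot_naat = rng.choices(naat, k=100)
boot_lft = rng.choices(lft, k=100)
p_boot = mann_whitney_p(boot_naat, boot_lft)
print(f"subset p = {p_subset:.3f}, bootstrapped p = {p_boot:.3f}")
```

If the subset and bootstrapped p-values fall on the same side of 0.05, the conclusion is robust to the small group sizes, which is the logic of the check reported above.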
There was also no significant correlation between time to extraction and coverage in the total dataset (R = − 0.02, p = 0.7) or the subset data (R = − 0.22, p = 0.1).
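The Spearman rank-order analysis can be sketched in plain Python as the Pearson correlation of ranks. The data below are synthetic stand-ins for time to extraction and genome coverage, chosen only to show the mechanics; ties are not handled, which is acceptable for continuous measurements like these.

```python
import math
import random

def spearman(x, y):
    """Spearman rank correlation: the Pearson correlation of the ranks.
    No tie handling, which is fine for continuous, untied data."""
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0] * len(v)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mean = (n + 1) / 2  # mean of the ranks 1..n
    num = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    den = math.sqrt(sum((a - mean) ** 2 for a in rx)
                    * sum((b - mean) ** 2 for b in ry))
    return num / den

rng = random.Random(7)
time_to_extraction = [rng.uniform(1, 30) for _ in range(50)]   # synthetic days
coverage = [rng.uniform(150, 250) for _ in range(50)]          # synthetic depth

r_s = spearman(time_to_extraction, coverage)
print(f"Spearman rs = {r_s:.2f}")  # independent draws, so rs should be near 0
```

Because the two synthetic variables are drawn independently, the correlation lands near zero, mirroring the weak, non-significant correlations reported in this paragraph.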
By sequencing samples derived from NAAT NP swabs and samples derived from positive BinaxNOW COVID-19 Ag Card swabs and comparing the proportions of successful sequencing, genome coverage, and RT-qPCR Ct values, we demonstrate that extraction reagents and sample processing do not significantly impact the ability to recover SARS-CoV-2 viral genomes. This suggests that positive LFT swabs could serve as a viable alternative for genomic surveillance. When comparing the sequencing results from LFT cards to NAAT swabs, there was no significant difference in the proportion of failed samples, genome coverage, or RT-qPCR Ct values. We did observe that NAAT samples had a significant, but modest, increase in time to extraction relative to the LFT samples. However, we did not observe a significant correlation between time to extraction and genome coverage or RT-qPCR Ct values, which implies that the difference in time to extraction does not have an overt impact. These findings indicate that sequencing of positive LFT swabs could complement traditional sequencing of NAAT swabs while yielding very similar sequencing quality. The increased ability to extract and amplify viral RNA from LFTs is consistent with their test performance, as previous studies have shown that they are more likely to report positive results at higher viral loads, corresponding to RT-qPCR Ct values < 30. This may partially explain the high success in viral genome sequencing from positive LFT swabs. The ability to use swabs from RDTs for SARS-CoV-2 genome sequencing is important for future surveillance efforts. As RDTs such as the BinaxNOW COVID-19 Ag Cards become more accessible and ubiquitous, NP swabs will become increasingly limited to clinical settings where NAATs are routinely employed. This change could potentially bias genomic surveillance data towards more severe cases that require clinical intervention.
Capturing samples from RDTs performed in the home or clinic setting would enable us to generate surveillance data that are more representative. In addition, by using clinical excess from previously collected swabs, we also eliminate the need for collection of a second swab, which can simplify IRB protocols for clinical studies and increase study participation. The future of over-the-counter RDTs will depend on the course of the pandemic. While the COVID-19 public health emergency declaration expired in May 2023, the EUA allowing their use remains in place. Henceforth, manufacturers will need to apply for formal FDA approval for continued use. However, the widespread use of RDTs during the pandemic could signal a paradigm shift in the availability of at-home and clinic-based infectious disease testing, and we feel that our findings may inform surveillance strategies for epidemiologically similar pathogens for which RDTs are currently employed. Undeniably, the availability of over-the-counter RDTs has provided agency to the public, allowing individuals to use testing to manage their personal risk and aid in decision making. Home testing could conceivably be expanded to other respiratory pathogens such as influenza virus or respiratory syncytial virus (RSV), which are commonly diagnosed in the outpatient setting using rapid antigen tests. With this possibility in mind, we must rethink the future of viral genomic surveillance so that sampling of cases in the community remains robust. One possible solution for obtaining samples from at-home testing would be to partner with the government's free at-home COVID-19 test program to provide a subsample of recipients with prepaid postage and mailing containers with viral transport media (VTM) that could be used to send positive samples to sequencing centers. Remuneration could also be considered to incentivize participation.
While at-home storage and transport conditions may vary considerably, direct-to-consumer genetic testing through companies like 23andMe and AncestryDNA provides a model for the collection and transport of samples with the intended application of sequencing. Further, several studies have extensively evaluated the stability of SARS-CoV-2 RNA under a number of storage and transport conditions, with and without transport media and cold storage. Alfaro-Nunez et al. found that viral RNA remained stable on dry, non-buffered swabs for up to 26 days when left at room temperature. Similar studies found that RT-qPCR Ct values remained stable among samples stored in phosphate-buffered saline or VTM at room temperature for up to 28 days, regardless of viral load. While these studies have largely focused on RT-qPCR performance, the robustness of SARS-CoV-2 genome sequencing has been demonstrated through the ability to successfully reconstruct viral genomes from seemingly complex or low-quality samples, including wastewater and environmental surfaces. With the current level of at-home testing, even with a modest sequencing failure rate, samples collected through a mail-in program would significantly improve community-based genomic surveillance. In the outpatient healthcare setting, primary care clinics performing RDTs are equipped to collect and store samples as described in this study and would provide a viable source for community-based sampling, much like CDC's U.S. Outpatient Influenza-like Illness Surveillance Network (ILINet). Our study is not without limitations. Due to the observational nature of our study, we were not able to directly compare sequencing success of different sample types from the same participant.
Furthermore, we did not consider vaccine history or disease severity, which may vary between settings (i.e., NAAT: hospital; RDT: clinic); however, the study population from which these samples were obtained was generally healthy and likely experienced mild disease. As a note, samples that fail sequencing may do so due to technical errors in library preparation; however, we expect this effect to be independent of swab type. Further, we report a significant difference in the time to RNA extraction between the two groups. To mitigate this issue, we conducted our analysis using a subset of the total data in which the time to extraction was between 14 and 21 days. This subset resulted in a similar number of samples between the two groups with similar time to extraction. Finally, in our study, positive BinaxNOW COVID-19 Ag Card swabs were removed from the card, placed in transport media, and stored in a clinic refrigerator until transport to the laboratory. While this protocol may function in clinic and outpatient settings, it may not be well suited for at-home testing, leaving the question of real-world viability unanswered. Indeed, we did not systematically evaluate the effect of variation in transport and storage conditions on viral genome sequencing; however, the previous studies described above have demonstrated that viral RNA stability is robust to storage duration and condition. While subsequent studies using parallel sampling from the same individual or a variety of currently marketed RDTs could resolve these limitations, we believe our current study demonstrates the ability to successfully sequence SARS-CoV-2 from swabs used for LFTs. Overall, we show that sequencing LFT swabs is not only possible but also results in comparable RT-qPCR Ct values, genome coverage, and sequencing failure rates.
These findings provide the foundation for community-based viral genomic surveillance, which will allow public health to maintain representative sampling of cases as we continue pandemic mitigation efforts.
Collection of samples and participant recruitment

A total of 690 testing swabs were collected from NP samples from NAAT-positive tests performed on the BioFire Torch using the Respiratory 2.1 panel (hereon referred to as NAAT, N = 135) or from positive BinaxNOW rapid antigen LFTs (N = 555). NP swabs were collected from children seeking care at a local hospital in Orlando, FL between October 2021 and February 2022. Positive NP NAAT swabs were placed in Zymo Research DNA/RNA Shield VTM and stored at 4 °C at the healthcare facility until the weekly scheduled pickup. The samples were then transported to the research laboratory in a Styrofoam cooler with ice, following the U.S. Department of Transportation Hazardous Materials regulations, and subsequently stored at 4 °C until RNA extraction. LFT swabs were mainly collected from college-aged individuals seeking care at a university student health service clinic during the same period, from October 2021 to February 2022. BinaxNOW rapid antigen LFTs were performed according to the manufacturer's instructions, which require the anterior nares swab to be inserted directly into the test card. After identification of positive specimens, swabs were removed from the test cards, placed in Zymo Research DNA/RNA Shield, and stored at 4 °C in the clinic until daily pickup. Swabs were transported by courier to the research laboratory, which was located adjacent to the clinic, and stored at 4 °C until RNA extraction.

SARS-CoV-2 RNA extraction and RT-qPCR

RNA extraction for all samples was performed using the QIAamp 96 Virus QIAcube HT kit automated platform. Our RT-qPCR reactions were carried out in a 10 μL volume using 4× TaqPath master mix (Thermo Fisher Scientific, Massachusetts, USA), 0.25 μM each of the 2019-nCoV_N1 (CDC) qPCR probe (5′-FAM-ACCCCGCATTACGTTTGGTGGACC-BHQ1-3′), forward primer (5′-GACCCCAAAATCAGCGAAAT-3′), and reverse primer (5′-TCTGGTTACTGCCAGTTGAATCTG-3′), 4.25 μL of molecular-grade H2O, and 2.5 μL of template RNA.
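As a worked example, the per-reaction volumes above can be scaled to a plate of samples. The master mix, water, and template volumes come from the text; the 0.25 μL primer/probe volumes assume 10 μM working stocks (our inference, chosen so the components sum to 10 μL), and the 10% overage is a common lab convention rather than part of the published protocol.

```python
import math

# Per-reaction volumes (uL) for the 10 uL RT-qPCR described above.
# Primer/probe volumes assume 10 uM working stocks (an inference that makes
# the components sum to 10 uL); the 10% overage is a convention, not protocol.
PER_REACTION_UL = {
    "4x TaqPath master mix": 2.5,
    "N1 probe (10 uM)": 0.25,
    "N1 forward primer (10 uM)": 0.25,
    "N1 reverse primer (10 uM)": 0.25,
    "molecular-grade H2O": 4.25,
    "template RNA": 2.5,
}

def batch_volumes(n_samples, overage=0.10):
    """Scale everything except template (added per well) to n reactions,
    padding the batch by the overage fraction for pipetting loss."""
    n = math.ceil(n_samples * (1 + overage))
    return {component: round(vol * n, 2)
            for component, vol in PER_REACTION_UL.items()
            if component != "template RNA"}

print(sum(PER_REACTION_UL.values()))  # → 10.0 (total reaction volume, uL)
print(batch_volumes(96))              # shared master mix for a 96-well run
```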
RT-qPCR was performed on a CFX Opus 96 instrument (Bio-Rad Laboratories, Hercules, California, USA) with the following conditions: UNG incubation at 25 °C for 2 min; reverse transcription at 50 °C for 15 min; polymerase activation at 95 °C for 2 min; and, finally, 35 cycles of amplification at 95 °C for 15 s and 55 °C for 30 s. All samples were run in duplicate, alongside positive and no-template controls.

SARS-CoV-2 viral genome sequencing

Samples with RT-qPCR Ct values ≤ 30 were selected for sequencing. Samples were prepared and sequenced according to the Oxford Nanopore Technologies Midnight RT-PCR Expansion pack (EXP-MRT001) together with the Rapid Barcoding Kit 96 (SQK-RBK110.96) protocol. In brief, viral cDNA was reverse-transcribed, followed by tiled PCR amplification, rapid barcode ligation, pooling, and SPRI bead clean-up. Libraries were sequenced on R9.4.1 flow cells with the GridION. Base-calling and demultiplexing were performed in real time by the GridION software. Assembly was performed in two steps (using default parameters) following the ARTIC Network bioinformatics protocol ( https://artic.network/ncov-2019/ncov2019-bioinformatics-sop.html ). The guppyplex script was used for quality control and filtering of reads (fragments 1000–1500 bp), followed by assembly with the MinION pipeline, using medaka to call variants against the Wuhan-Hu-1 reference (GenBank accession number MN908947.3). The pangolin software tool was then used to assign a lineage to each sample.

LFT vs NAAT performance comparison

To evaluate the suitability of samples obtained from positive rapid antigen tests for viral genome sequencing, we compared viral RNA extraction, RT-qPCR, and sequencing success against samples collected from the clinical excess of positive NAATs. We first tested for statistically significant differences between the date of collection and the date of viral RNA extraction for the two sample groups.
We then compared the frequency of samples that failed to amplify during RT-qPCR and the resulting cycle threshold (Ct) values among those that amplified. The Ct value is inversely proportional to the amount of viral target in the sample: lower Ct values indicate a greater quantity of virus, and higher values a lower quantity. Last, we assessed sequencing success (failed samples, viral genome coverage, and genomes passing sequencing QC) between the two groups. The chi-squared statistic was used to compare frequencies between categories (e.g., pass/fail for NAAT vs LFT), and the Mann–Whitney U test was used to determine whether the values of the two groups differed significantly. Additionally, for the sub-analysis assessing the potential association of time to extraction with sequencing outcomes, we tested the robustness of the Mann–Whitney U test at our sample size by bootstrapping the sub-groups (n = 41 and n = 44, respectively) to generate two groups of 100 for each comparison and re-running the statistical test; the significance, defined by a p-value < 0.05, was not affected. All statistical analysis was performed using Python 3.10.2 . All visualizations were produced using RStudio running R v3.6.0 .

Ethics statement

This study was reviewed by the University of Central Florida Institutional Review Board and received a non-human subject determination.
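The bootstrap robustness check described in the statistical analysis above can be sketched in a few lines. This is an illustrative sketch only: the days-to-extraction values below are simulated placeholders (the raw data are not given in the text), and the group labels are assumptions.

```python
# Sketch of the described robustness check: resample two small sub-groups
# (n = 41 and n = 44) with replacement up to n = 100 each and re-run the
# Mann-Whitney U test. Data are simulated, not the study's raw values.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)

# Hypothetical days from collection to extraction for two sample groups
group_a = rng.poisson(lam=2.0, size=41)   # e.g., LFT swabs (daily pickup)
group_b = rng.poisson(lam=5.0, size=44)   # e.g., NAAT swabs (weekly pickup)

u_stat, p_orig = mannwhitneyu(group_a, group_b, alternative="two-sided")

# Bootstrap each sub-group to n = 100 and re-test
boot_a = rng.choice(group_a, size=100, replace=True)
boot_b = rng.choice(group_b, size=100, replace=True)
u_boot, p_boot = mannwhitneyu(boot_a, boot_b, alternative="two-sided")

# The conclusion (p < 0.05 or not) should agree between the two runs
print(f"original p = {p_orig:.4g}, bootstrapped p = {p_boot:.4g}")
```

The check passes if significance (p < 0.05 or not) is unchanged between the original and bootstrapped comparisons, as the authors report for their data.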
Supplementary Figures. Supplementary Table 1.
|
Cervical pedicle screw fixation with the Tianji orthopedic surgical robot | 5c5ef98f-2ee7-4e5f-8243-942adbd353b2 | 11792645 | Surgical Procedures, Operative[mh] | Cervical spinal disease is considered one of the leading causes of human disability. The preferred treatment for cervical spinal disease is conservative therapy; surgery is considered when conservative therapy is ineffective or when the disease seriously impairs quality of life . The placement of screws, including pedicle screws and lateral mass screws, is important for maintaining the stability of the cervical spine . However, compared to lateral mass screws, pedicle screws carry higher technical requirements and greater risks, so their application has been limited. The technique of CPS fixation was first studied by Abumi et al. in 1994 and has been widely applied in clinical practice due to its excellent biomechanical stability . Compared to the thoracic and lumbar vertebrae, the cervical vertebrae have pedicles of smaller diameter , which makes implanting pedicle screws very difficult. Moreover, the cervical spine lies in close proximity to many important structures, including the spinal cord, vertebral artery, and nerve roots. Therefore, improper placement of screws may lead to decreased stability and even serious neurological or vascular complications . In traditional fluoroscopy-assisted pedicle screw placement, the entry point and direction of the screws are judged from intraoperative two-dimensional images and preoperative CT reconstructions . The surgeon's spatial judgment and operative proficiency have a significant impact on the accuracy of screw placement, resulting in differences in accuracy among surgeons. Placing pedicle screws in the cervical spine is especially difficult and increases the possibility of a poor prognosis; the technique therefore demands high technical skill and entails a long learning curve under fluoroscopy guidance.
In recent years, the emergence of orthopedic surgical robots has offered a way to address this problem. In 1992, ROBODOC became the first robot employed in orthopedics, boring a hole in the femoral head to allow surgeons to optimize prosthesis size for total hip arthroplasty . After years of improvement, the Tianji Robot (TINAVI Medical Technologies Co., Ltd., Beijing, China) was approved for clinical use by the China Food and Drug Administration in 2016 and achieved the world's first robot-assisted upper cervical spine surgery . With the assistance of an orthopedic surgical robot, the surgeon only needs to plan the screw path on the image and move the robotic arm to the specified position; the sleeve at the end of the robotic arm then accurately indicates the entry point and the direction of the screw path . This study retrospectively compared the accuracy of screw implantation in cervical spine surgery using this robot-assisted technique versus the conventional fluoroscopy-assisted free-hand technique. Intraoperative blood loss, duration of surgery, and postoperative hospital stay were also compared. The safety of pedicle screw implantation was evaluated based on postoperative complications.
Study design and participants

In this case-control study, patients were recruited and managed at Jiangsu Provincial People's Hospital (JSPH). The study commenced two years after the introduction of the Tianji Robot for spinal surgery at JSPH in order to avoid the learning curve . Inclusion criteria were (1) newly diagnosed cervical spinal disease requiring pedicle screw fixation; and (2) posterior cervical internal fixation with CPS assisted by the orthopedic surgical robot or by traditional fluoroscopy. Exclusion criteria were (1) severe osteoporosis; (2) old fractures; (3) severe pedicle deformity; (4) cervical pedicle diameter smaller than the screw diameter (3.5 mm); (5) preoperative CTA examination indicating unilateral vertebral artery stenosis or atresia; and (6) severe systemic disease or coagulation disorder. Other types of screws, such as lateral mass screws, were excluded from analysis.

Participant characteristics

From March 2021 to March 2024, altogether 95 patients were treated with posterior cervical spinal surgery using either Tianji orthopedic surgical robot-assisted or traditional fluoroscopy-assisted pedicle screw implantation: 44 cases in the orthopedic surgical robot group and 51 cases in the traditional fluoroscopy group. The orthopedic surgical robot group consisted of 30 males and 14 females, aged 23 to 82 years, with a median age of 57 years. This group included 24 cases of cervical fracture (with or without cervical dislocation), 13 cases of cervical spinal stenosis, 1 case of benign intraspinal tumor, 3 cases of cervical malignancy, 1 case of cervical kyphosis, 1 case of congenital cervical deformity, and 1 case of basilar invagination. The traditional fluoroscopy group consisted of 37 males and 14 females, aged 40 to 81 years, with a median age of 59 years.
This group included 22 cases of cervical fracture (with or without cervical dislocation), 19 cases of cervical spinal stenosis, 5 cases of benign intraspinal tumor, and 5 cases of cervical malignancy.

Interventions

The surgeons performing CPS implantation were senior spine surgeons with many years of experience in manual pedicle screw implantation who were also skilled in robot-assisted pedicle screw fixation. All participants received pedicle screw fixation assisted by either the orthopedic surgical robot or conventional fluoroscopy.

Orthopedic surgical robot-assisted cervical spinal surgery

The patient is placed in a prone position on a Jackson table after general anesthesia, with the head secured by a Mayfield frame, and the area is sterilized and covered with a sterile sheet. A longitudinal midline incision is made at the back of the neck, and the target segment's spinous processes, pedicles, and facet joints are exposed by subperiosteal dissection. The navigation tracker is fixed to the distal spinous process or the proximal head frame. The orthopedic surgical robot system (TINAVI Medical Technologies Co., Ltd., Beijing, China) with a 3D C-arm X-ray machine (Siemens Medical Solutions, Erlangen, Germany) is used under standard operating procedures to perform 3D scanning. After the scan is completed, the pedicle trajectory is planned on the robot workstation and the mechanical arm is moved to the target area. A guide needle sleeve is inserted to confirm the pedicle entry point, and the cortical bone at the entry point is ground off using a grinding drill. A 1.2 mm diameter pedicle guide needle is drilled into the bone with an electric drill along the sleeve, and the procedure is repeated with the mechanical arm until all guide needles are inserted. After confirming the position of the guide needles by fluoroscopy, a 2.7 mm hollow drill is used to enlarge the hole along the direction of each guide needle; the guide needles are then removed and the holes tapped.
A round-headed probe is used to check the screw hole, and a 3.5 mm diameter pedicle screw is inserted and tightened (Figure ).

Conventional fluoroscopy-assisted cervical spinal surgery

The patient is placed in a prone position after general anesthesia, with the head secured by a Mayfield frame, and the area is sterilized and covered with a sterile sheet. A longitudinal midline incision is made at the back of the neck, and the target segment's spinous processes, pedicles, and facet joints are exposed by subperiosteal dissection. The pedicle screw insertion point is confirmed with reference to the Abumi implantation method . A grinding drill is used to remove the cortical bone at the insertion point, and drilling and tapping are performed manually. A round probe is used to verify the integrity of the screw hole, and the pedicle screw is inserted manually.

Definitions of outcomes

Sociodemographic data assessed at baseline (preoperatively) included age, sex, and BMI. Postoperative CT scans were acquired within 1 week after surgery. The primary outcome was the accuracy of screw implantation, evaluated according to the Neo scale on postoperative CT . Intraoperative blood loss, duration of surgery, and postoperative hospital stay were also compared. The safety of pedicle screw implantation was evaluated based on postoperative complications, including cerebrospinal fluid leakage, spinal cord injury, nerve root injury, infection, and vertebral artery injury.

Neo scale

Grade 0: screw completely within bone. Grade 1: cortical breach of < 2 mm. Grade 2: cortical breach of ≥ 2 mm and < 4 mm. Grade 3: cortical breach of ≥ 4 mm.

Statistical analysis

All statistical analyses were performed using SPSS Statistics version 25.0 (IBM, Armonk, NY). All tests were two-tailed with an α of 0.05.
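The Neo scale above maps cleanly to a small helper function. This is an illustrative sketch (the function names are ours, not from the study), with Grades 0 and 1 counted as clinically acceptable, matching the acceptable-rate outcome used throughout the paper.

```python
# Minimal encoding of the Neo grading scale: map a measured cortical
# breach (mm, from postoperative CT) to a grade, and flag "acceptable"
# placements (Grade 0 or Grade 1).
def neo_grade(breach_mm: float) -> int:
    """Return the Neo scale grade for a cortical breach in millimetres."""
    if breach_mm <= 0:
        return 0   # screw completely within bone
    elif breach_mm < 2:
        return 1   # cortical breach of < 2 mm
    elif breach_mm < 4:
        return 2   # cortical breach of >= 2 mm and < 4 mm
    else:
        return 3   # cortical breach of >= 4 mm

def is_acceptable(breach_mm: float) -> bool:
    """Grades 0 and 1 are counted as clinically acceptable."""
    return neo_grade(breach_mm) <= 1

print(neo_grade(0.0), neo_grade(1.5), neo_grade(3.9), neo_grade(4.0))
# -> 0 1 2 3
```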
From March 2021 to March 2024, altogether 95 patients underwent posterior cervical spinal surgery (44 in the orthopedic surgical robot group and 51 in the traditional fluoroscopy group). Baseline sociodemographic characteristics and diagnoses were balanced between the two groups (Tables and ). The mean age of the overall study population was 57 years, 29.47% were women, and the average body mass index was 23.22 kg/m2. Regarding diagnosis, 48.42% of the overall study population had cervical fracture, 33.68% cervical spinal stenosis, and 17.89% tumor or other conditions. According to the Neo scale, 74.4% of all 422 screw placements were perfect (Grade 0) and 89.1% were acceptable (Grade 0 + Grade 1). In the orthopedic surgical robot group, 77.2% of 272 CPS were Grade 0, 15.1% Grade 1, 5.9% Grade 2, and 1.8% Grade 3; the acceptable rate was 92.3% (Table ; Fig. ). In the traditional fluoroscopy group, 69.3% of 150 CPS were Grade 0, 14.0% Grade 1, 6.7% Grade 2, and 10.0% Grade 3; the acceptable rate was 83.3% (Table ). Overall, the orthopedic surgical robot group achieved better accuracy in screw implantation than the traditional fluoroscopy group, that is, a higher acceptable rate of screws ( p = 0.0083). This was especially true for pedicle screws placed in the C1, C2, and C4 vertebrae: the acceptable rate for C1 screw placement was significantly higher than in the traditional fluoroscopy group ( p = 0.0195); for C2, both the perfect rate ( p = 0.0238) and the acceptable rate ( p = 0.0459) were significantly higher; and for C4, the acceptable rate was significantly higher ( p = 0.018). There was no significant difference in the perfect or acceptable rates at the other cervical levels.
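The acceptable-rate comparison above can be illustrated with a 2×2 chi-squared test. The counts below are reconstructed from the reported totals and percentages (272 screws at 92.3% acceptable, 150 screws at 83.3% acceptable), not the raw data, so the result only approximates the published p = 0.0083.

```python
# Illustration of the acceptable-rate comparison between groups, using
# screw counts reconstructed from the reported percentages (approximate).
from scipy.stats import chi2_contingency

table = [
    [251, 272 - 251],   # robot group: acceptable, unacceptable (~92.3%)
    [125, 150 - 125],   # fluoroscopy group: acceptable, unacceptable (~83.3%)
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # p should fall below 0.05
```

`chi2_contingency` applies the Yates continuity correction by default for 2×2 tables, which is one plausible choice for counts of this size.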
In addition, postoperative hospital stay was shorter in the orthopedic surgical robot group than in the traditional fluoroscopy group (7.432 ± 2.193 vs. 9.118 ± 5.102 days, p = 0.0447), but the duration of surgery was longer (240 (219, 318) vs. 203 (178, 243) min, p = 0.0038), which may be related to more screws being implanted during surgery as well as to robotic manipulation and intraoperative fluoroscopy procedures (Table ). There was no significant difference in intraoperative blood loss between the groups ( p = 0.0872). In terms of postoperative complications, 2 cases of cerebrospinal fluid leakage and 1 case of decreased muscle strength occurred in the traditional fluoroscopy group. The 2 patients with cerebrospinal fluid leakage were managed with continuous lumbar cistern drainage, suturing of the drainage tube opening, and other measures, and were discharged successfully after a few days. In the patient with decreased muscle strength, postoperative CT showed that the right C4 pedicle screw had entered the spinal canal (Neo scale Grade 3) and the left pedicle screw was Grade 2. The patient presented with decreased grip strength in the left hand, refused a second operation, and chose to be discharged (Fig. ). No infection or vertebral artery injury occurred in either group. There was 1 case of cerebrospinal fluid leakage in the orthopedic surgical robot group, but it was not related to the robotic navigation (Table ).
Robotic technology has been used in manufacturing for decades, but only recently has it been applied to medicine, and later still did it receive approval from the China Food and Drug Administration for clinical use in orthopedics. The SpineAssist/Renaissance robot (Mazor Robotics, Caesarea, Israel) was the first used for the spine, and studies have shown that screws implanted using the SpineAssist/Renaissance robot were placed successfully and accurately . ROSA (Medtech, Montpellier, France) is another robot used in spinal surgery and has been reported to perform transforaminal lumbar interbody fusion (TLIF) accurately and safely . Cervical spinal surgery involving orthopedic surgical robotic assistance for CPS implantation may offer potential advantages over conventional fluoroscopy-assisted CPS implantation. A meta-analysis of 6 studies, including 2 controlled studies, reported a total of 482 cervical screws placed with the use of a surgical robot, of which 78.6% were CPS; 471 of the 482 cervical screws (97.7%) achieved a clinically acceptable grade (a < 2-mm screw breach through the cortex), with an average screw deviation of 0.95 mm . Our study showed that the orthopedic surgical robot group implanted CPS more accurately than the traditional fluoroscopy group, with higher overall perfect and acceptable rates. Notably, the orthopedic surgical robot group performed particularly well in the upper cervical vertebrae (C1-C2). Before our study, the Tianji robot had performed robot-assisted C1-C2 transarticular screw fixation for atlantoaxial instability and robot-assisted odontoid fracture fixation, with excellent results, as the first reported clinical applications of robot-assisted cervical spinal surgery .
Compared with the traditional fluoroscopy group, postoperative hospital stay was significantly shorter in the orthopedic surgical robot group ( p = 0.0447), consistent with a previous study by Fan et al. . Because the orthopedic surgical robot group had fewer complications (only one case of cerebrospinal fluid leakage, unrelated to the robot), patients recovered faster and had shorter postoperative hospital stays. Cervical fractures are often associated with other injuries, which can significantly prolong postoperative hospital stay. In this study, the proportion of cervical spine fractures did not differ significantly between the two groups. Notably, patients had undergone management of severe life-threatening injuries, including traumatic brain injuries, organ damage, and others, before receiving cervical spine surgery. A review indicated that the average postoperative hospital stay for posterior cervical surgeries was 5.7 days . In contrast, the average postoperative hospital stay in this study was 8.3 days, which is relatively prolonged. We believe this extended duration was primarily due to the higher proportion of cervical fractures, which often necessitate additional orthopedic interventions before discharge. However, the duration of surgery in the orthopedic surgical robot group was significantly longer than in the traditional fluoroscopy group ( p = 0.0038). The increased surgical time can be partly attributed to the intraoperative preparation phase and the additional intraoperative CT images required. Besides, the number of CPS inserted during surgery tended to be higher in the orthopedic surgical robot group (6.18 CPS per case) than in the traditional fluoroscopy group (2.91 CPS per case). This study focused solely on CPS.
For safety reasons, when it was difficult to implant CPS, lateral mass screws were chosen instead, which occurred primarily in the traditional group. In addition to lengthening the duration of surgery, we believe that a larger number of pedicle screws may increase intraoperative blood loss (not significant in this study) but has no significant impact on postoperative hospital stay. When CPS was not suitable, lateral mass screws were implanted, which had a minimal impact on surgical time and intraoperative blood loss and no significant effect on postoperative hospital stay. As for the total cost of surgery, in addition to the higher expense of additional screws, the use of the Tianji robot incurs an additional charge of 27,000 RMB, which is adjusted according to healthcare insurance policies. In the present study, intraoperative blood loss did not differ significantly between the two groups. Likewise, in the study by Fan et al., intraoperative blood loss in the orthopedic surgical robot group undergoing open surgery did not differ significantly from that of the traditional fluoroscopy group . Nevertheless, a few studies have shown that robot-assisted minimally invasive percutaneous cervical spinal surgery leads to significantly less intraoperative blood loss, which can be a major advantage of the orthopedic surgical robot . In addition, the incidence of postoperative complications was similar between the two groups ( p = 0.6211). In the meta-analysis, after excluding studies involving other types of cervical screws, the acceptable rate of CPS was 96.9% . Factors contributing to deviation in robot-assisted surgery include slippage on the bone surface at the entry point, the surgeon's technique, marker displacement, respiratory amplitude, and muscle stretching . Compared with the reported meta-analysis of robot-assisted CPS fixation, our study had a lower acceptable rate (92.3% vs. 96.9%) .
A further reason may be lateral deviation caused by muscle traction during the open surgical procedure. It has been reported that percutaneous needle placement may reduce the effects of muscle traction and improve screw placement accuracy . However, our study had a larger sample size (number of CPS) than other studies, which lends it a degree of persuasiveness. Several limitations exist in our study. First, this was a single-center study, so a multi-center study is needed for more convincing results. Second, although the study showed improved accuracy of screw implantation in the orthopedic surgical robot group, long-term follow-up is needed to confirm a better prognosis. Third, although the total number of CPS was relatively sufficient, relatively few pedicle screws were implanted in C3 and C4. In conclusion, this retrospective study showed that the accuracy of cervical spine surgery with CPS implantation using the orthopedic surgical robot-assisted technique tended to be superior to the traditional fluoroscopy-assisted technique, while maintaining comparable safety.
|
Appendage abnormalities in spiders induced by an alternating temperature protocol in the context of recent advances in molecular spider embryology | fd577a25-0cda-42cf-b810-be381c76ac89 | 10493090 | Anatomy[mh] | In natural aquatic and terrestrial habitats animals with body deformities are relatively common. This observation applies particularly to arthropods, including crustaceans, insects, myriapods, and chelicerates ( e.g ., ; ; ; ; ; ; ; ; ; ). Since malformed arthropods are found purely by chance— e.g ., during field research—the causes of their abnormalities remain unknown. Various hypotheses have attempted to explain the origin of these defects, sometimes affecting only one body part or organ, with a variety of physical, mechanical, chemical, and biological factors proposed ( e.g ., ). Potential teratogenic factors can be tested in laboratory experiments using invertebrates, including species considered models for study . A number of chemical reagents ( e.g ., , ; ; ; ), radiation , high humidity , low/high temperature , and mechanical disturbance/manipulation have already been exploited in teratology research. For instance, hypothesized a teratogenic effect of temperature on spiders and later ( e.g ., ) investigated the effect of supraoptimal temperature on embryogenesis in harvestmen (Opiliones). Subsequently, the research was extended by using abrupt temperature changes during the incubation of Eratigena (formerly Tegenaria ) atrica (C. L. Koch, 1843) embryos ( e.g ., ; ; , ; , ; ). It was observed that the application of alternating temperatures (lower and higher than the optimum) during early embryogenesis could lead to a range of deformities in both body tagmata. The most severe defects led to high embryo mortality or made it difficult for embryos to hatch to the postembryo stage. Moreover, some hatched but deformed individuals were unable to lead a normal life and achieve reproductive success. Anomalies described in E. 
atrica have included: oligomely (absence of one or more appendages), symely (fusion of contralateral appendages), schistomely (bifurcation of appendages), heterosymely (fusion of ipsilateral appendages), polymely (presence of one or more additional appendages), bicephaly (partial prosomal duplication), and so-called complex anomalies (two or more categories of anomaly occurring simultaneously) ( e.g ., ; ; ; ; , ; ; ; ). At the Nicolaus Copernicus University in Toruń, Poland, teratological research on spiders by application of a thermal factor has been carried out since the 1970s ( e.g ., ). Since then, various anomalies have been described, with new cases recorded every year. Although early studies focused mainly on morphological description of teratologically altered individuals, attempts to explain the causes of deformities were also made. , suggested that many induced anomalies seen in late embryos or postembryos were presaged by structural aberrations evident in embryos as early as the blastoderm stage. For instance, thermal shocks led to the appearance of gaps in the blastoderm that proposed may eliminate some embryo fragments, causing oligomely. On the other hand, it was suggested that symely or heterosymely could result if thermal treatment brought parts of the blastoderm closer together than was normal. However, also noted that anomalies appearing well into embryonic development, e.g ., during the limb bud elongation stage (see ), were not necessarily preceded by obvious structural abnormalities in earlier stages. This demonstrates that interpreting teratologies only in mechanistic terms is at best insufficient. Over the last quarter-century, molecular techniques have been enlisted for the functional analysis of genes involved in the development of body segmentation and appendage formation in spiders. 
In particular, extensive use of in situ hybridization, RNA interference (RNAi), and immunolabeling has provided much insight into the expression of many developmental regulatory genes during spider embryogenesis (to cite but a few such studies). This work has demonstrated various spider anomalies that result from the suppression or misexpression of specific segmentation and appendage patterning genes, raising the expectation that at least some abnormalities induced by thermal shock to embryos will be explicable in these terms. This does not preclude the possibility that mechanisms less directly related to gene expression, but also affected by temperature, are additionally or alternatively involved in creating defects. Thus, in this report, aberrant gene expression is broadly construed to include abnormal expression resulting from such temperature-influenced effects as atypical cell migration, cell division, cell death, and changes in metabolism. As such, overall expression of a gene could potentially be quantitatively normal but still present as abnormal phenotypes if positional or temporal perturbations to expression deviate substantially from normal. In the 2020/2021 breeding season, using alternating temperatures during early embryogenesis of E. atrica, we obtained 212 postembryos with various body deformities. Since many of these anomalies have already been described in our previous works, we focused on those observed for the first time or those particularly relevant to evo-devo research, such as the rare cases where an appendage is found on the pedicel (petiolus, petiole) that connects the prosoma to the opisthosoma. Regarding the latter, appendages on one or both sides of the pedicel in postembryos of E. atrica were first described by , .
Some postembryos so afflicted did not survive beyond this stage, but for those that did, differences in the longevity of these appendages were later noted in : in one individual the appendage disappeared after the postembryo molted, while in another, a short, two-podomere appendage was present until the 6 th stadium. additionally described a postembryo of E. atrica with one substantial limb on the pedicel. Initially composed of four podomeres, by the 5 th stadium it resembled, in form and segmentation, a complete walking leg, albeit distorted. During the 6 th molt the leg broke off at the trochanter-femur joint. It grew back starting with the 7 th molt, but the spider died during the 9 th molt when loss of the leg re-occurred, accompanied by substantial loss of hemolymph. proposed that the presence of an appendage on the pedicel is an atavistic trait. They also questioned whether the pedicel in spiders is correctly considered the first segment of the opisthosoma, since it has the potential to develop appendages similar in size and structure to walking legs. Our study was aimed at further documenting the diversity of developmental anomalies that can be induced in E. atrica by applying the alternating temperature protocol to embryos. We also sought to consider abnormalities like those seen in the 2020/2021 breeding season in terms of potential errors in developmental gene expression. For the latter, we reviewed the literature related to the expression of such genes with a primary focus on functional studies that have employed RNAi to knock down specific genes in spiders.
Teratological experiments on embryos of the spider Eratigena atrica (C. L. Koch, 1843) were carried out in the 2020/2021 breeding season. In September 2020, 32 sexually mature females and 24 males were collected from the vicinity of Toruń, Włocławek, and Chełmża, Poland. In the laboratory each individual was placed in a 250 cm³ well-ventilated glass container, kept in a darkened room. A temperature of 21 °C and a relative humidity (RH) of about 70% were maintained in the room throughout the experiment. Spiders were fed Tenebrio molitor larvae twice a week and water was supplied in soaked cotton balls. After 3 weeks, a male was introduced to each female for insemination. This procedure was repeated several days later with a different male to help ensure that all females were inseminated. First egg sacs were laid after a few weeks, followed periodically by additional egg sacs, averaging seven or eight egg sacs per female (in two previous breeding seasons), with up to 19 egg sacs constructed by a single female. All egg sacs were immediately removed from the containers and cut open to remove eggs, which were then counted and evenly divided into two groups: an experimental group and a control group. To verify that most eggs were fertilized, three randomly selected eggs per egg sac were immersed in paraffin oil and inspected. Embryos from the control group were incubated at a temperature of 22 °C and 70% RH until hatching to the postembryo occurred, while embryos from the experimental group were exposed to alternating temperatures of 14 °C and 32 °C. The temperature was changed every 12 h for 10 days, until segments of the prosoma appeared on the germ band and limb buds appeared on these segments (comparable to Stage 9 in the trechaleid Cupiennius salei (Keyserling, 1877); Stage 8.2 in the theridiid Parasteatoda (formerly Achaearanea) tepidariorum (C. L. Koch, 1841)). Subsequently, incubation was continued using the same conditions applied to the control group.
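The alternating-temperature regime can be summarized computationally. The following is a minimal, purely illustrative sketch (function and variable names are ours, not part of the original protocol) that enumerates the 12 h temperature blocks applied to the experimental group before the return to control conditions:

```python
from datetime import timedelta

# Illustrative sketch of the protocol described above: embryos alternate
# between 14 °C and 32 °C every 12 h for 10 days, then return to the
# control incubation temperature of 22 °C. Names here are assumptions.
LOW_C, HIGH_C, CONTROL_C = 14, 32, 22
INTERVAL = timedelta(hours=12)
DURATION = timedelta(days=10)

def schedule(start_temp=LOW_C):
    """Yield (elapsed_hours, temperature_C) for each 12 h block."""
    temps = [LOW_C, HIGH_C] if start_temp == LOW_C else [HIGH_C, LOW_C]
    n_blocks = int(DURATION / INTERVAL)   # 20 twelve-hour blocks
    for i in range(n_blocks):
        yield i * 12, temps[i % 2]
    yield n_blocks * 12, CONTROL_C        # hand-off to control conditions

blocks = list(schedule())
assert blocks[0] == (0, 14) and blocks[1] == (12, 32)
```

Whether treatment started at the low or high temperature is not stated in the text, so the starting temperature is a parameter here.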
After hatching, postembryos from both groups were examined for abnormalities in the prosoma and opisthosoma. Deformed individuals were photographed using a Zeiss Axiocam 105 color CMOS camera mounted on a Zeiss Axio Lab A1 light microscope and operated with ZEN software (version 2.3, blue edition). We also gathered references presenting results of spider RNAi experiments, as they might shed light on potential gene misexpression underlying the appendage abnormalities induced by the alternating temperature protocol.
In the 2020/2021 breeding season, we obtained approximately 10,000 eggs/embryos, half of which constituted the control group. In this group, no hatched individuals with developmental defects were found, all postembryos having a properly developed prosoma with appendages and an opisthosoma with no observed abnormalities . Approximately 10% of these controls failed to hatch, though development proceeded far enough in some that their fertilized status was apparent. The remainder, however, no doubt included some unfertilized eggs, with a comparable number presumably present in the experimental group, though we do not know what this number was. Eggs (100 or more) in the first egg sac built by a female are usually all fertilized or nearly so, but in subsequent egg sacs, which contain fewer total eggs, there are typically higher percentages of unfertilized eggs. In the experimental group embryo mortality was much higher. About 40% of all embryos died at various stages of development; some failed to hatch from their eggshells even though their embryonic development appeared complete. In total, 3,007 postembryos were obtained in this group. Among these, individuals with a normally developed body structure predominated (2,795; 93%). The remaining postembryos (212) had various defects, most of which affected the prosoma and its appendages, although in nine individuals (4% of abnormal postembryos) deformities were also found in the opisthosoma. Oligomely was, by far, the most frequent anomaly, but multiple examples of each of several other types of anomaly—heterosymely, bicephaly, schistomely, symely, and polymely—were also obtained . Moreover, >30% of postembryos displaying abnormal phenotypes did not fall neatly into one of these five types. They included individuals with complex anomalies, i.e ., with multiple defects of more than one type, and those with abnormalities not conforming to any of these five types, grouped in as ‘Other abnormalities’. 
The latter group included postembryos with significantly shortened or deformed appendages. Since many of the observed deformities have already been described in our previous studies, we present only selected cases: those recorded for the first time, together with two postembryos with a short appendage on the pedicel, constituting the only instances of polymely observed during this breeding season. The complex anomaly in the spider in affected only the right side of the prosoma while the left side was formed normally with six well-developed, segmented appendages: chelicera, pedipalp, and four walking legs (L1–L4). On the right side of the prosoma the chelicera was missing (oligomely) and two appendages emerged from the gnathocoxa (= gnathendite = gnathobase = endite = maxilla), a normal pedipalp and a short protuberance (labeled ‘a’ in ) that lacked segmentation and moved independently. The legs were normally developed. The spider in was likewise affected by a complex anomaly on the right side of the prosoma only. The chelicera was represented by a small, mobile protuberance (labeled ‘a’ in ) and the pedipalp was absent (oligomely). The legs had a normal structure. The spider in was also affected by oligomely, the deformity most frequently observed in the teratological material. On the right side of the prosoma this individual had a well-developed chelicera and pedipalp, but only three legs. On the left side of the prosoma there was a complete set of appendages. Bilateral oligomely, though less common, was also observed in the teratological material. This anomaly affected the spider shown in . On the right side of the prosoma there were five appendages—a chelicera and four legs—with the pedipalp missing. On the left side of the prosoma there were also only five appendages—a chelicera, pedipalp, and three legs; one leg was missing. The individual in was affected by schistomely of leg L2 on the right side of the prosoma.
The bifurcation started in the middle of the metatarsus and included the tarsus. The schistomely was symmetric in that the two distal ends (‘a’ and ‘b’ in ) were about the same length. The remaining appendages, including chelicerae and pedipalps, showed no irregularities. The spider in had an especially unusual anomaly that affected leg L4 on the left side of the prosoma, presenting as a widened coxa from which only two short branches (‘a’ and ‘b’ in ) projected. The only visible segmentation on these branches was a single articulation, possibly demarcating the trochanter. This anomaly may represent schistomely initiated proximally within the developing leg, forestalling much further development. On the right side all appendages were well developed. and present a rare anomaly. These two postembryos had a very short appendage on the pedicel that connects the prosoma and opisthosoma. This additional appendage was on the left side of the pedicel. In both cases no other abnormalities were apparent. In the spider shown in , the shortened appendage had the thickness of a walking leg, but it was not segmented. It had two small, rounded protrusions located prolaterally and distally (‘a’ and ‘b’ in ). In the spider in , the appendage on the pedicel was of similar length and (proximally) width to that on the other specimen, and it was segmented to the extent that the first podomere (coxa) could be distinguished. The appendage widened distally, ending in an uneven surface with several bumps.
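The headline proportions reported for the experimental group follow directly from the raw counts; a short arithmetic cross-check (a sketch, with variable names of our own choosing):

```python
# Cross-check of the counts reported for the 2020/2021 experimental group.
total_eggs_exp = 5000   # ~half of the ~10,000 eggs obtained
hatched = 3007          # postembryos obtained in the experimental group
normal = 2795
abnormal = 212

assert normal + abnormal == hatched
print(f"abnormal among hatched postembryos: {abnormal / hatched:.0%}")   # ~7%
print(f"normal among hatched postembryos:   {normal / hatched:.0%}")     # ~93%
print(f"abnormal among all experimental embryos: {abnormal / total_eggs_exp:.0%}")  # ~4%
```

These reproduce the 93% (normal) figure given above and the 7% and 4% figures cited in the discussion.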
Using the established thermal method for inducing developmental abnormalities in spider embryos , we obtained 212 individuals with body defects in the 2020/2021 breeding season, representing 7% (212/3,007) of the successfully hatched postembryos, and about 4% (212/5,000) of the embryos (hatched and unhatched), in the experimental group. These fairly low percentages suggest that spiders, as ectotherms, possess mechanisms that help make them relatively resistant to sudden temperature changes. One such mechanism likely includes the expression of heat shock protein ( Hsp ) genes, encoding protein-folding chaperones. It has been shown that the expression of Hsp genes significantly increases in response to various environmental stressors, including high temperature ( and references therein). Other mechanisms are presumably also involved as some induced morphological aberrations can be successfully eliminated by embryonic self-regulation and regeneration processes ( ; ; ; ; references therein). But the high mortality among experimental embryos (40%) as compared to control embryos (10%) also suggests a relatively high percentage of induced abnormality in the experimental group, severe enough to prevent hatching. It therefore appears that the alternating temperature protocol was effective in disrupting normal development in about one-third of embryos, causing a range of developmental anomalies and high embryo mortality. This thesis is supported by an absence of developmental defects and low embryo mortality within the control group. We have noted a trend for mortality percentages in both control and experimental groups to rise over the past decade , from a low of 4% and 20%, respectively , to the present study’s high (10%, 40%, respectively). 
Conversely, the percentage of successfully hatched postembryos in the experimental group that exhibited defects has shown a downward trend over the same period, from highs of 17–18% to a low of about 4%, rebounding moderately to 7% in the present study. These opposite trends in the experimental group could be related: if a larger percentage of embryos adversely affected by the alternating temperature protocol fail to hatch, a smaller percentage of defective individuals may remain among the embryos that hatch successfully. As yet we have no explanations for these trends. We have not knowingly made any changes to our procedures in this period. Field-collected adults have been captured, and control group spiderlings released, in the same locations throughout this period. Conceivably, our collection and release activities at these sites may be generating or contributing to the trends. This can be investigated by comparing, during the same breeding season, mortality and defect percentages between the progeny of adults obtained from our usual collection/release sites and the progeny of adults collected from distant virgin sites. Other factors potentially contributing to the observed trends, such as the influence of climatic changes on reproduction in E. atrica, may also be profitably investigated. In the teratological material, oligomely was the most frequent anomaly by a large margin, accounting for about 55% of cases, and it was even more prevalent considering that oligomely was a component in some postembryos ( e.g., and ) categorized as having ‘Complex anomalies’. Other anomaly categories were observed much less frequently, which agrees with the results of previous studies. If we express percentages by considering only the six conspicuous single anomaly categories ( i.e., discounting the ‘Complex anomalies’ and ‘Other abnormalities’ categories) as they occurred on prosomata and pedicels, cases of oligomely accounted for 79.6% of defects in this study.
This percentage, across five earlier studies, ranged from 73.5–84.8%. In contrast, percentages for the other five single anomaly categories were (given as the % for this study followed by the % range in the five earlier studies): heterosymely, 8.2%, 4.9–10.4%; schistomely, 4.1%, 2.2–9.9%; bicephaly, 4.8%, 0–6.5%; symely, 2.0%, 0–7.8%; polymely, 1.4%, 0–3.7%. It remains to be determined why instances of oligomely dominate among teratological postembryos that have been subjected to the alternating temperature protocol as embryos. One important consideration is that percentages of different anomaly types presented in this and earlier studies reflect their occurrence in successfully hatched postembryos. Thus, the first step in addressing the question of oligomely prevalence is to determine whether these percentages agree with the percentages of defect types as they exist in the embryo stage. It is possible that oligomelic embryos are more likely to survive and successfully hatch than embryos exhibiting other defect types, and consequently oligomely is better represented among postembryos than among embryos. We therefore intend to explore the feasibility of ascertaining anomaly types on a large scale in late embryos.

Oligomelic postembryos

Molecular embryological research has suggested alterations to normal gene expression that might account for some instances of appendage loss. Parental RNAi (pRNAi) studies, especially in P. tepidariorum, have revealed a range of abnormal phenotypes from knockdown of selected developmental genes, depending on the specific gene suppressed and on the degree of suppression of a given gene within different embryos. These phenotypes can include embryos exhibiting oligomely, though in some instances lethal abnormalities co-occur, indicating that widespread down-regulation of the targeted genes does not account for oligomelic postembryos like those in – . For example, knockdown of the Notch-signaling-pathway component Delta in P.
tepidariorum ( Pt-Delta ) or the spider gap gene Pt-Sox21b.1 results in loss of leg-bearing segments, but this is accompanied by loss of all opisthosomal segments. More localized suppression, however, comparable to that achieved by embryonic RNAi (eRNAi) , cannot be ruled out in oligomelic postembryos. As an aside, conspicuously lethal consequences of gene downregulation, as occur with knockdown of genes such as Pt-Delta and Pt-Sox21b.1 , are potentially relevant to the high mortality that was observed in experimental E. atrica embryos. Equally lethal, though less conspicuous, is embryonic development that, superficially, proceeds essentially to completion without obvious defect, but the embryo nevertheless fails to hatch. Embryos like these were among the 40% of the experimental group that did not hatch. It is thus worth noting that fully developed embryos, not exhibiting defects but unable to hatch, were produced with high frequency when three transcription factors, Pt-foxQ2 , Pt-six3.1 , or Pt-six3.2 , were individually suppressed by pRNAi . We hasten to emphasize, however, that there may well be many mechanisms by which this inability to hatch is produced, with lethal consequence, including some unrelated to abnormal gene expression. Here and elsewhere in this discussion, in noting similarities between phenotypes obtained by thermal shock and by RNAi of specific genes, the involvement of those genes in producing the thermally-induced abnormalities, while a possibility, is by no means assured and certainly no such definitive claim is intended. More likely to be involved in appendage losses like those in are genes that, when knocked down, result in oligomelic embryos able to survive hatching. Examples of two such genes, expressed during early embryogenesis within the period our thermal treatment is applied, are the gap gene hunchback ( hb ) and Distal-less ( Dll ), an appendage patterning gene that also plays an earlier gap gene role in spiders . 
pRNAi of hb in P. tepidariorum ( Pt-hb ) yielded postembryos missing the L2 leg pair or both L1 and L2 legs , while, similarly, pRNAi of Pt-Dll produced postembryos lacking the L1 leg pair or both L1 and L2 legs . These losses reflected loss of the segments on which the legs would have developed and it was only segments bearing walking legs that were so affected , reflecting the distinction between segmentation of the head region, with its chelicerae and pedipalps, and that of the thorax region, with its four pairs of legs . If this also applies to E. atrica , then abnormal suppression of Ea-hb would not contribute to oligomely involving chelicerae and pedipalps ( , , and ), but it could be a factor in spiders with missing legs ( and ). The same can be said for Ea-Dll suppression during its early involvement with prosomal segmentation (its gap gene role) . Later suppression of Dll in limb buds , whether preceded by early Dll suppression (pRNAi; ) or not (eRNAi; ; ), resulted in truncated appendages but not in any additional appendage loss. There are, however, two confounding considerations where potential abnormal Ea-hb or Ea-Dll expression is concerned: (1) Though did note left-right leg reduction asymmetry in Pt-hb pRNAi embryos, leg losses resulting from prosomal segment losses have usually been symmetric , whereas the alternating temperature treatment applied in this study has often yielded asymmetric ( and ), as well as symmetric ( e.g ., ), leg oligomely. (2) We have not been able to determine which legs specifically have been missing in oligomelic postembryos, even after examining leg neuromeres in histological sections , and therefore we do not know if leg losses have been consistent with Ea-hb or early Ea-Dll suppression. 
Regarding (1), if, speculatively, Ea-hb or Ea-Dll is inhibited by our alternating temperature protocol (at this point possibilities only), such inhibition might be more localized, random, and asymmetric than that often resulting from pRNAi. Indeed, unilaterally oligomelic E. atrica with corresponding unilateral losses of leg nerves and ganglia indicate that thermally-induced disturbances result in losses of hemisegments more often than of full segments . This is reminiscent of asymmetric prosomal appendage shortening that has been induced in C. salei by knockdown of Cs-Dll using eRNAi and of seven-legged postembryos that have occasionally resulted from Pt-Dll pRNAi, indicating loss of a single L1 hemisegment . noted the median furrow (ventral sulcus) that divides the right and left halves of the embryonic germ band , and the seemingly independent development of the two halves, as a possible explanation for such asymmetric phenotypes. Regarding (2), future studies could explore a strategy used by for ascertaining the identity of missing legs: for oligomelic postembryos able to molt successfully to at least 1 st instars, the number and arrangement of slit sense organs on the sternum, compared to control spiders, should help identify the missing legs and provide an alternative to histological sectioning for indicating if symmetric/asymmetric oligomely of legs is accompanied by loss of an entire segment/hemisegment, as previously suggested based on histology . We should also note that, unlike RNAi experiments, in which the gene targeted by treatment is known, genes most directly impacted by application of the alternating temperature protocol may be cofactors, upstream regulators, or downstream targets of genes discussed here as being potentially perturbed by the protocol, rather than directly affecting expression of the candidate gene itself. For example, pRNAi of the transcription factor Sp6-9 in P. 
tepidariorum ( Pt-Sp6-9 ) has been observed to reduce or eliminate Pt-Dll expression as well as eliminate expression of the segment polarity gene Pt-engrailed-1 ( Pt-en-1 ) in the L1 and L2 segments , similar to the effect of Pt-Dll pRNAi on Pt-en-1 expression . Resulting phenotypes included embryos missing these two segments and, so, also the legs that would form on them . Thus, in this example, defects consistent with inhibited Ea-Dll expression could, hypothetically, arise via thermally-induced direct disruption to a different member of the same gene network, namely Ea-Sp6-9 . Also, genes most directly affected may vary among embryos depending on, e.g ., the exact timing of a temperature switch in relation to an embryo’s stage of development. It is also worth repeating that thermally-induced perturbations to normal gene expression might have abnormal spatial or temporal components in addition to, or rather than, quantitative aberrations. On first consideration, missing pedipalps, as in and , could suggest disturbance to the normal expression of the Hox gene labial ( lab ), specifically the paralog lab-1 ( lab-B in ), first expressed at Stage 4 in P. tepidariorum . Its knockdown by pRNAi can result in postembryos lacking pedipalps, though, unlike leg losses that are due to loss of the corresponding prosomal segments, the pedipalpal segment is retained . On the other hand, like the above pRNAi-induced leg losses, pedipalp loss as seen in Pt-lab-1 pRNAi postembryos has been symmetric , whereas the alternating temperature treatment more often results in asymmetric pedipalp oligomely in E. atrica ( and ), suggesting a potential localized disruption to Ea-lab-1 expression. However, an abnormal postembryo like that shown in , in which the site of a missing pedipalp is adjacent to a greatly reduced chelicera (labeled ‘a’), does not support this suggestion if we assume a shared genetic cause for both anomalies (this assumption is by no means certain). 
This is because expression of lab-1 (or any of the Hox genes) is not involved in specifying chelicera morphology . An alternative explanation that might encompass both defects has not yet emerged from functional studies in spiders. The gene dachshund-2 is expressed proximally in both chelicerae and pedipalps, but the only noted phenotypic consequences of its knockdown by pRNAi in P. tepidariorum are malformed patellae in the walking legs . Two paralogs of extradenticle ( exd-1 , exd-2 ) and homothorax-1 ( hth-1 ) are also expressed proximally in pedipalps and chelicerae , but exd has not been the subject of RNAi experiments in spiders, or any chelicerates , and among chelicerates hth function has only been examined by eRNAi in the harvestman Phalangium opilio Linnaeus, 1758 . However, studies in insects and spiders indicate that exd-1 and hth-1 of spiders are functionally linked (Hth-1 required for translocation of Exd-1 into the nucleus), such that knockdown of either gene would likely produce similar, though not identical, phenotypes ( ; ; references therein). Phenotypes resulting from knockdown of the single-copy hth in P. opilio ( Po-hth ) included homeotic transformations of chelicerae and pedipalps to leg identities, appendage truncation, and fusions between chelicerae and pedipalps, though, importantly, apparently not pedipalp oligomely (the results do, however, state “The labrum and/or some appendages also failed to form” (among Class I phenotype embryos) without elaboration). Interestingly, like the aforementioned defect asymmetry observed in Cs-Dll eRNAi C. salei embryos , a high incidence of asymmetric defects was also obtained with Po-hth eRNAi P. opilio embryos . As mentioned, asymmetric defects are likewise often obtained by the alternating temperature protocol. 
These three examples demonstrate that aberrations on one side of the germ band do not necessarily affect the other side and suggest limited, non-global perturbations to gene expression or other developmental processes.

Postembryos with schistomely or in ‘Other abnormalities’ category

Appendage development relies on differentiation along proximal-distal (P-D), dorsal-ventral (D-V), and anterior-posterior (A-P) axes, the last especially little studied in spiders. Genes involved with establishing these axes may be susceptible to thermally-induced abnormal expression, resulting in limb malformations. For example, a key player in establishing the D-V axis is the gene FoxB, encoding a forkhead box transcription factor that is ventrally expressed within appendages. Its knockdown in P. tepidariorum by pRNAi resulted in greatly reduced hatching success and altered expression of downstream genes that normally show ventral ( wingless ( Pt-wg / Wnt1 ), Pt-H15-2 ), dorsal ( optomotor-blind ( Pt-omb )), and distal ( decapentaplegic ( Pt-dpp )) expression within appendages, resulting in ‘dorsalized’ legs and pedipalps. Such Pt-FoxB pRNAi embryos that were able to hatch successfully and progress to the 1st stadium exhibited distally crooked legs and pedipalps, comparable to some postembryos included in our ‘Other abnormalities’ category. This category also included postembryos with significantly shortened appendages, a phenotype that has also been observed in mildly affected Pt-Sp6-9 pRNAi embryos and postembryos, and has included asymmetric defects. Appendage bifurcation, i.e., schistomely ( and ), in postembryos might also be considered in terms of erroneous expression of genes modeling the appendage axes, with schistomely representing distal duplication of the P-D axis. Though functional data ( e.g., RNAi) are lacking in chelicerates, expression data in P.
tepidariorum for dpp and wg / Wnt1 , among other evidence from spiders and other arthropods , have been consistent with dpp and wg/Wnt1 expression early in spider appendage development initiating a gene cascade that generates the P-D axis . In legs and pedipalps, three distinct domains of expression establish the P-D axis via expression of Dll distally, dachshund-1 ( dac-1 ) medially, and exd-1 / hth-1 proximally . Disturbances in the normal expression of dpp , wg / Wnt1 , or their downstream targets caused by thermal shocks may result in a duplication of the P-D axis. In a report of cheliceral schistomely in the spider Tetragnatha versicolor Walckenaer, 1841, hypothesized that the defect could be replicated by introducing ectopic Dpp and Wg/Wnt1. The schistomely shown in , at the distal end of a leg, suggests perturbations that included direct or indirect abnormality in Dll expression while the more proximal schistomely indicated in , on a noticeably wider appendage than the normal legs, potentially represents abnormal expression of dpp , wg / Wnt1 , and dac-1 (among other possibilities), the latter’s expression coincident with the trochanter and femur . Postembryos exhibiting pedicel polymely Arguably the most interesting cases from the perspective of evolutionary/developmental biology involve two individuals with an appendage on the pedicel (first segment of the opisthosoma, O1; in spiders, coincident with somite VII) that are presented in and . Appendages do not usually form on the O1 segment in spiders and such defects are rare even among E. atrica subjected to alternating temperatures as embryos. Within this segment, the principal Hox genes expressed are the two paralogs of Antennapedia ( Antp ) . Knockdown of Antp-1 in P. tepidariorum ( Pt-Antp-1 ) by pRNAi has demonstrated that it is responsible for repressing the development of legs on the O1 segment . 
At its most severe, this down-regulation of Pt-Antp-1 resulted in sufficient de-repression of leg development in O1 that 10 walking legs formed; the usual eight plus a pair on the pedicel that were like the former morphologically and in lateral placement except a little shorter and thinner ( ; replicated by ). Expression of the genes that establish the P-D axis in legs ( Pt-exd-1 , Pt-hth-1 , Pt-dac-1 , Pt-Dll ) was nearly identical between the ectopic O1 legs and normal L1–L4 legs. Moreover, expression of the Hox genes Deformed-A ( Pt-Dfd-A ) and Sex combs reduced-B ( Pt-Scr-B ; paralogs as designated in ) within the 10 legs indicated that the ectopic legs on O1 were not homeotic copies of any of the normal walking legs, but they were instead true O1 segment de-repressed legs . It is of interest that obtained not only severely affected postembryos with a pair of complete legs on the pedicel following knockdown of Pt-Antp-1 , but in more moderately affected individuals they observed only short leg-like projections on the pedicel. Further, in a triple pRNAi experiment (to suppress Pt-Antp-1 and two other Hox genes), they obtained two postembryos with an incomplete appendage on just one side of the pedicel. They attributed this asymmetric (“mosaic”) phenotype to the lesser quantity of each dsRNA that could be injected when attempting to inhibit three genes simultaneously, resulting in less effective suppression of Pt-Antp-1 . This range of outcomes is again reminiscent of the results obtained when alternating temperatures are applied to embryos of E. atrica , where appendages may form on the pedicel symmetrically or only on one side , and these appendages may exhibit little or considerable development, from a short, unsegmented projection to a segmented, essentially complete leg ( , ; ; this study). 
This suggests that the alternating temperature protocol has the potential to disturb, to varying extent, normal expression of Ea-Antp-1 or associated up- or downstream genes in the O1 segment. There is a long history of embryological observations on spiders that indicates an ancestry in which appendages were present on somite VII ( e.g ., ; ; ; ; ; ). Principally, this is indicated by a small, short-lived protuberance or patch, sometimes explicitly interpreted as an incipient limb bud, appearing on each O1 hemisegment when the opisthosomal limb buds develop. These transient O1 limb buds apparently do not form in all spider taxa , however, as they have not been noted in some detailed embryological studies . It is notable that putative limb buds on O1 have been observed in Heptathela , a member of the basal Mesothelae, as well as in several members of the derived araneomorph RTA clade, to which E. atrica belongs . Considering that small, transitory protrusions (potential appendages) may appear on the pedicel (O1) segment in embryonic spiders, and that by use of targeted gene suppression (pRNAi) it is possible to obtain appendages on the pedicel with the structure of walking legs that nevertheless have their own O1 identity , it might be worth reconsidering whether somite VII, the pedicel, is indeed the first segment of the opisthosoma, as it is usually described, rather than the last segment of the prosoma. This thought is stimulated by another result obtained by ; that limb repression also occurs as a normal part of development in the O2 segment (somite VIII), but when the genes that redundantly promote this repression ( Pt-Antp-1 , Ultrabithorax-1 ( Pt-Ubx-1 )) are suppressed by double pRNAi, the ectopic appendages that form on O2 appear far more vestigial than the legs induced to form on O1. 
This may reflect less effective overall de-repression in O2 because of the repression redundancy present in O2, not shared by O1, but it could also conceivably reflect an early euchelicerate ancestry in which appendages on somites VII and VIII differed substantially in morphology, with those on VII more limb-like and those on VIII more plate-like, suggestive of a border between tagmata. Such a difference in appendage morphology has been interpreted for the Devonian euchelicerate Weinbergina and is also seen in extant Xiphosurida (horseshoe crabs) . Applying :4) definition of a tagma as “…a distinct and discrete morphological region that comprises a series of equivalently modified appendages that constitute a unit of specific form…or sometimes function…”, the traditional view of the O1 segment as part of the spider opisthosoma seems appropriate. Both the normally legless condition of the pedicel and the maneuverability it imparts to the rest of the opisthosoma suggest a form and function more in keeping with those of the opisthosoma. In addition, during spider embryogenesis, the germ band initially divides into the prosomal segments and a posterior ‘segment addition zone’ (SAZ) from which the opisthosomal segments, including O1, subsequently derive in anterior-to-posterior sequence . These differing paths to segmentation in the two tagmata also favor an opisthosomal identity for the O1 segment. On the other hand, and acknowledge that establishing borders between tagmata can be difficult because the ends of a tagma and their associated appendages may differ substantially from the rest of the tagma. The border between prosoma and opisthosoma, with somite VII’s questionable affiliation, is given as a prime problematic example . They review evidence from fossil and extant chelicerates that supports a chelicerate groundplan in which somite VII is prosomal, as suggested by . 
This possibility is further supported by the potential for appendages with leg-like morphology to develop on the spider pedicel, whether induced by application of pRNAi or alternating temperatures, and, along with transitory limb bud formation on the O1 segment in some spiders, suggests loss of somite VII appendages present in basal euchelicerate ancestors of arachnids . Thus, an interpretation of atavism for appendages developing on the pedicel in teratological spiders remains valid. Also noteworthy is the observation that, in some chelicerates, walking leg segments (all or just L4), as well as the opisthosomal segments, are derived from the SAZ and, in one known instance (a mite), O1 segmentation precedes that of L4 (reviewed in ). Thus, it seems the mechanism of segmentation during embryonic development does not necessarily provide a reliable means for assigning segments to tagmata in a way that agrees with morphological/functional regions. Summary and future directions By applying alternating temperatures during early spider embryogenesis, we obtained high embryo mortality, changes in number, size, and shape of appendages or their podomeres, and formation of appendages on the pedicel; a body segment (O1 = somite VII) on which appendages are not normally found in spiders. Thus, by using appropriate methods, abnormalities can be induced that potentially reflect certain ancestral traits present in basal (eu)chelicerates, including possibly atavistic appendages on segment O1. This type of developmental abnormality has a bearing on the question of the tagma to which somite VII belongs, prosoma or opisthosoma, with implications tied to chelicerate phylogeny. Based on recent research on genes that determine the formation of segments and appendages, we suspect that at least some of the observed developmental defects arising from our alternating temperature protocol are the result of blocked or otherwise aberrant expression of relevant genes, including Hox genes. 
Atypical expression may potentially include spatial and temporal, as well as quantitative, deviations from normal. Though the possible involvement of specific genes as discussed above is speculative, it is one step toward the goal of testing hypotheses that attribute specific anomaly types to disturbances affecting specific genes. For example, by identifying hb as a candidate gene that may have its expression distorted by the alternating temperature protocol, potentially resulting in oligomely (as discussed above), the expression of hb over time may be compared between experimental and control embryos to ascertain if the former exhibit notable deviations in expression ( e.g ., asymmetric expression) compared to the latter. Modified versions of the alternating temperature protocol can also be investigated that intentionally attempt to disrupt expression of a specific gene and/or increase defect frequency; for example, by narrowing the window of treatment and exploring the application of an abrupt temperature switch at specific times relative to the height of expression for a given gene and given site(s) within embryos. This could lead to the establishment of a protocol that is able to induce certain types of anomalies with greater regularity, reducing numbers of embryos that would need to be screened for defects.
Molecular embryological research has suggested alterations to normal gene expression that might account for some instances of appendage loss. Parental RNAi (pRNAi) studies, especially in P. tepidariorum , have revealed a range of abnormal phenotypes from knockdown of selected developmental genes , depending on the specific gene suppressed and on the degree of suppression of a given gene within different embryos. These phenotypes can include embryos exhibiting oligomely, though in some instances lethal abnormalities co-occur, indicating that widespread down-regulation of the targeted genes does not account for oligomelic postembryos like those in – . For example, knockdown of the Notch-signaling-pathway component Delta in P. tepidariorum ( Pt-Delta ) or the spider gap gene Pt-Sox21b.1 results in loss of leg-bearing segments, but this is accompanied by loss of all opisthosomal segments. More localized suppression, however, comparable to that achieved by embryonic RNAi (eRNAi) , cannot be ruled out in oligomelic postembryos. As an aside, conspicuously lethal consequences of gene downregulation, as occur with knockdown of genes such as Pt-Delta and Pt-Sox21b.1 , are potentially relevant to the high mortality that was observed in experimental E. atrica embryos. Equally lethal, though less conspicuous, is embryonic development that, superficially, proceeds essentially to completion without obvious defect, but the embryo nevertheless fails to hatch. Embryos like these were among the 40% of the experimental group that did not hatch. It is thus worth noting that fully developed embryos, not exhibiting defects but unable to hatch, were produced with high frequency when three transcription factors, Pt-foxQ2 , Pt-six3.1 , or Pt-six3.2 , were individually suppressed by pRNAi . We hasten to emphasize, however, that there may well be many mechanisms by which this inability to hatch is produced, with lethal consequence, including some unrelated to abnormal gene expression. 
Here and elsewhere in this discussion, in noting similarities between phenotypes obtained by thermal shock and by RNAi of specific genes, the involvement of those genes in producing the thermally-induced abnormalities, while a possibility, is by no means assured and certainly no such definitive claim is intended. More likely to be involved in appendage losses like those in are genes that, when knocked down, result in oligomelic embryos able to survive hatching. Examples of two such genes, expressed during early embryogenesis within the period our thermal treatment is applied, are the gap gene hunchback ( hb ) and Distal-less ( Dll ), an appendage patterning gene that also plays an earlier gap gene role in spiders . pRNAi of hb in P. tepidariorum ( Pt-hb ) yielded postembryos missing the L2 leg pair or both L1 and L2 legs , while, similarly, pRNAi of Pt-Dll produced postembryos lacking the L1 leg pair or both L1 and L2 legs . These losses reflected loss of the segments on which the legs would have developed and it was only segments bearing walking legs that were so affected , reflecting the distinction between segmentation of the head region, with its chelicerae and pedipalps, and that of the thorax region, with its four pairs of legs . If this also applies to E. atrica , then abnormal suppression of Ea-hb would not contribute to oligomely involving chelicerae and pedipalps ( , , and ), but it could be a factor in spiders with missing legs ( and ). The same can be said for Ea-Dll suppression during its early involvement with prosomal segmentation (its gap gene role) . Later suppression of Dll in limb buds , whether preceded by early Dll suppression (pRNAi; ) or not (eRNAi; ; ), resulted in truncated appendages but not in any additional appendage loss. 
There are, however, two confounding considerations where potential abnormal Ea-hb or Ea-Dll expression is concerned: (1) Though left-right leg reduction asymmetry has been noted in Pt-hb pRNAi embryos, leg losses resulting from prosomal segment losses have usually been symmetric , whereas the alternating temperature treatment applied in this study has often yielded asymmetric ( and ), as well as symmetric ( e.g ., ), leg oligomely. (2) We have not been able to determine which legs specifically have been missing in oligomelic postembryos, even after examining leg neuromeres in histological sections , and therefore we do not know if leg losses have been consistent with Ea-hb or early Ea-Dll suppression. Regarding (1), if, speculatively, Ea-hb or Ea-Dll is inhibited by our alternating temperature protocol (at this point possibilities only), such inhibition might be more localized, random, and asymmetric than that often resulting from pRNAi. Indeed, unilaterally oligomelic E. atrica with corresponding unilateral losses of leg nerves and ganglia indicate that thermally-induced disturbances result in losses of hemisegments more often than of full segments . This is reminiscent of asymmetric prosomal appendage shortening that has been induced in C. salei by knockdown of Cs-Dll using eRNAi and of seven-legged postembryos that have occasionally resulted from Pt-Dll pRNAi, indicating loss of a single L1 hemisegment . The median furrow (ventral sulcus) that divides the right and left halves of the embryonic germ band , and the seemingly independent development of the two halves, has been noted as a possible explanation for such asymmetric phenotypes.
Regarding (2), future studies could explore a strategy used previously for ascertaining the identity of missing legs: for oligomelic postembryos able to molt successfully to at least 1st instars, the number and arrangement of slit sense organs on the sternum, compared to control spiders, should help identify the missing legs and provide an alternative to histological sectioning for indicating if symmetric/asymmetric oligomely of legs is accompanied by loss of an entire segment/hemisegment, as previously suggested based on histology . We should also note that, unlike RNAi experiments, in which the gene targeted by treatment is known, genes most directly impacted by application of the alternating temperature protocol may be cofactors, upstream regulators, or downstream targets of genes discussed here as being potentially perturbed by the protocol, rather than the candidate gene itself. For example, pRNAi of the transcription factor Sp6-9 in P. tepidariorum ( Pt-Sp6-9 ) has been observed to reduce or eliminate Pt-Dll expression as well as eliminate expression of the segment polarity gene Pt-engrailed-1 ( Pt-en-1 ) in the L1 and L2 segments , similar to the effect of Pt-Dll pRNAi on Pt-en-1 expression . Resulting phenotypes included embryos missing these two segments and, so, also the legs that would form on them . Thus, in this example, defects consistent with inhibited Ea-Dll expression could, hypothetically, arise via thermally-induced direct disruption to a different member of the same gene network, namely Ea-Sp6-9 . Also, genes most directly affected may vary among embryos depending on, e.g ., the exact timing of a temperature switch in relation to an embryo’s stage of development. It is also worth repeating that thermally-induced perturbations to normal gene expression might have abnormal spatial or temporal components in addition to, or rather than, quantitative aberrations.
On first consideration, missing pedipalps, as in and , could suggest disturbance to the normal expression of the Hox gene labial ( lab ), specifically the paralog lab-1 ( lab-B in ), first expressed at Stage 4 in P. tepidariorum . Its knockdown by pRNAi can result in postembryos lacking pedipalps, though, unlike leg losses that are due to loss of the corresponding prosomal segments, the pedipalpal segment is retained . On the other hand, like the above pRNAi-induced leg losses, pedipalp loss as seen in Pt-lab-1 pRNAi postembryos has been symmetric , whereas the alternating temperature treatment more often results in asymmetric pedipalp oligomely in E. atrica ( and ), suggesting a potential localized disruption to Ea-lab-1 expression. However, an abnormal postembryo like that shown in , in which the site of a missing pedipalp is adjacent to a greatly reduced chelicera (labeled ‘a’), does not support this suggestion if we assume a shared genetic cause for both anomalies (this assumption is by no means certain). This is because expression of lab-1 (or any of the Hox genes) is not involved in specifying chelicera morphology . An alternative explanation that might encompass both defects has not yet emerged from functional studies in spiders. The gene dachshund-2 is expressed proximally in both chelicerae and pedipalps, but the only noted phenotypic consequences of its knockdown by pRNAi in P. tepidariorum are malformed patellae in the walking legs . Two paralogs of extradenticle ( exd-1 , exd-2 ) and homothorax-1 ( hth-1 ) are also expressed proximally in pedipalps and chelicerae , but exd has not been the subject of RNAi experiments in spiders, or any chelicerates , and among chelicerates hth function has only been examined by eRNAi in the harvestman Phalangium opilio Linnaeus, 1758 . 
However, studies in insects and spiders indicate that exd-1 and hth-1 of spiders are functionally linked (Hth-1 required for translocation of Exd-1 into the nucleus), such that knockdown of either gene would likely produce similar, though not identical, phenotypes ( ; ; references therein). Phenotypes resulting from knockdown of the single-copy hth in P. opilio ( Po-hth ) included homeotic transformations of chelicerae and pedipalps to leg identities, appendage truncation, and fusions between chelicerae and pedipalps, though, importantly, apparently not pedipalp oligomely (the results do, however, state “The labrum and/or some appendages also failed to form” (among Class I phenotype embryos) without elaboration). Interestingly, like the aforementioned defect asymmetry observed in Cs-Dll eRNAi C. salei embryos , a high incidence of asymmetric defects was also obtained with Po-hth eRNAi P. opilio embryos . As mentioned, asymmetric defects are likewise often obtained by the alternating temperature protocol. These three examples demonstrate that aberrations on one side of the germ band do not necessarily affect the other side and suggest limited, non-global perturbations to gene expression or other developmental processes.
Postembryos with schistomely or in ‘Other abnormalities’ category
Appendage development relies on differentiation along proximal-distal (P-D), dorsal-ventral (D-V), and anterior-posterior (A-P) axes, the last especially little studied in spiders. Genes involved with establishing these axes may be susceptible to thermally-induced abnormal expression, resulting in limb malformations. For example, a key player in establishing the D-V axis is the gene FoxB , encoding a forkhead box transcription factor that is ventrally expressed within appendages . Its knockdown in P. tepidariorum by pRNAi resulted in greatly reduced hatching success and altered expression of downstream genes that normally show ventral ( wingless ( Pt-wg / Wnt1 ), Pt-H15-2 ), dorsal ( optomotor-blind ( Pt-omb )), and distal ( decapentaplegic ( Pt-dpp )) expression within appendages, resulting in ‘dorsalized’ legs and pedipalps . Such Pt-FoxB pRNAi embryos that were able to hatch successfully and progress to the 1 st stadium exhibited distally crooked legs and pedipalps, comparable to some postembryos included in our ‘Other abnormalities’ category . This category also included postembryos with significantly shortened appendages, a phenotype that has also been observed in mildly affected Pt-Sp6-9 pRNAi embryos and postembryos, and has included asymmetric defects . Appendage bifurcation, i.e ., schistomely ( and ), in postembryos might also be considered in terms of erroneous expression of genes modeling the appendage axes, with schistomely representing distal duplication of the P-D axis . Though functional data ( e.g ., RNAi) are lacking in chelicerates , expression data in P. tepidariorum for dpp and wg / Wnt1 , among other evidence from spiders and other arthropods , have been consistent with dpp and wg/Wnt1 expression early in spider appendage development initiating a gene cascade that generates the P-D axis . 
In legs and pedipalps, three distinct domains of expression establish the P-D axis via expression of Dll distally, dachshund-1 ( dac-1 ) medially, and exd-1 / hth-1 proximally . Disturbances in the normal expression of dpp , wg / Wnt1 , or their downstream targets caused by thermal shocks may result in a duplication of the P-D axis. In a report of cheliceral schistomely in the spider Tetragnatha versicolor Walckenaer, 1841, it was hypothesized that the defect could be replicated by introducing ectopic Dpp and Wg/Wnt1. The schistomely shown in , at the distal end of a leg, suggests perturbations that included direct or indirect abnormality in Dll expression, while the more proximal schistomely indicated in , on a noticeably wider appendage than the normal legs, potentially represents abnormal expression of dpp , wg / Wnt1 , and dac-1 (among other possibilities), the latter’s expression coincident with the trochanter and femur .
Postembryos exhibiting pedicel polymely
Arguably the most interesting cases from the perspective of evolutionary/developmental biology involve two individuals with an appendage on the pedicel (first segment of the opisthosoma, O1; in spiders, coincident with somite VII) that are presented in and . Appendages do not usually form on the O1 segment in spiders and such defects are rare even among E. atrica subjected to alternating temperatures as embryos. Within this segment, the principal Hox genes expressed are the two paralogs of Antennapedia ( Antp ) . Knockdown of Antp-1 in P. tepidariorum ( Pt-Antp-1 ) by pRNAi has demonstrated that it is responsible for repressing the development of legs on the O1 segment . At its most severe, this down-regulation of Pt-Antp-1 resulted in sufficient de-repression of leg development in O1 that 10 walking legs formed; the usual eight plus a pair on the pedicel that were like the former morphologically and in lateral placement except a little shorter and thinner ( ; replicated by ). Expression of the genes that establish the P-D axis in legs ( Pt-exd-1 , Pt-hth-1 , Pt-dac-1 , Pt-Dll ) was nearly identical between the ectopic O1 legs and normal L1–L4 legs. Moreover, expression of the Hox genes Deformed-A ( Pt-Dfd-A ) and Sex combs reduced-B ( Pt-Scr-B ; paralogs as designated in ) within the 10 legs indicated that the ectopic legs on O1 were not homeotic copies of any of the normal walking legs, but they were instead true O1 segment de-repressed legs . It is of interest that these researchers obtained not only severely affected postembryos with a pair of complete legs on the pedicel following knockdown of Pt-Antp-1 , but in more moderately affected individuals they observed only short leg-like projections on the pedicel. Further, in a triple pRNAi experiment (to suppress Pt-Antp-1 and two other Hox genes), they obtained two postembryos with an incomplete appendage on just one side of the pedicel.
They attributed this asymmetric (“mosaic”) phenotype to the lesser quantity of each dsRNA that could be injected when attempting to inhibit three genes simultaneously, resulting in less effective suppression of Pt-Antp-1 . This range of outcomes is again reminiscent of the results obtained when alternating temperatures are applied to embryos of E. atrica , where appendages may form on the pedicel symmetrically or only on one side , and these appendages may exhibit little or considerable development, from a short, unsegmented projection to a segmented, essentially complete leg ( , ; ; this study). This suggests that the alternating temperature protocol has the potential to disturb, to varying extent, normal expression of Ea-Antp-1 or associated up- or downstream genes in the O1 segment. There is a long history of embryological observations on spiders that indicates an ancestry in which appendages were present on somite VII ( e.g ., ; ; ; ; ; ). Principally, this is indicated by a small, short-lived protuberance or patch, sometimes explicitly interpreted as an incipient limb bud, appearing on each O1 hemisegment when the opisthosomal limb buds develop. These transient O1 limb buds apparently do not form in all spider taxa , however, as they have not been noted in some detailed embryological studies . It is notable that putative limb buds on O1 have been observed in Heptathela , a member of the basal Mesothelae, as well as in several members of the derived araneomorph RTA clade, to which E. atrica belongs . 
Considering that small, transitory protrusions (potential appendages) may appear on the pedicel (O1) segment in embryonic spiders, and that by use of targeted gene suppression (pRNAi) it is possible to obtain appendages on the pedicel with the structure of walking legs that nevertheless have their own O1 identity , it might be worth reconsidering whether somite VII, the pedicel, is indeed the first segment of the opisthosoma, as it is usually described, rather than the last segment of the prosoma. This thought is stimulated by another result: that limb repression also occurs as a normal part of development in the O2 segment (somite VIII), but when the genes that redundantly promote this repression ( Pt-Antp-1 , Ultrabithorax-1 ( Pt-Ubx-1 )) are suppressed by double pRNAi, the ectopic appendages that form on O2 appear far more vestigial than the legs induced to form on O1. This may reflect less effective overall de-repression in O2 because of the repression redundancy present in O2, not shared by O1, but it could also conceivably reflect an early euchelicerate ancestry in which appendages on somites VII and VIII differed substantially in morphology, with those on VII more limb-like and those on VIII more plate-like, suggestive of a border between tagmata. Such a difference in appendage morphology has been interpreted for the Devonian euchelicerate Weinbergina and is also seen in extant Xiphosurida (horseshoe crabs) . Applying the definition of a tagma as “…a distinct and discrete morphological region that comprises a series of equivalently modified appendages that constitute a unit of specific form…or sometimes function…”, the traditional view of the O1 segment as part of the spider opisthosoma seems appropriate. Both the normally legless condition of the pedicel and the maneuverability it imparts to the rest of the opisthosoma suggest a form and function more in keeping with those of the opisthosoma.
In addition, during spider embryogenesis, the germ band initially divides into the prosomal segments and a posterior ‘segment addition zone’ (SAZ) from which the opisthosomal segments, including O1, subsequently derive in anterior-to-posterior sequence . These differing paths to segmentation in the two tagmata also favor an opisthosomal identity for the O1 segment. On the other hand, some authors acknowledge that establishing borders between tagmata can be difficult because the ends of a tagma and their associated appendages may differ substantially from the rest of the tagma. The border between prosoma and opisthosoma, with somite VII’s questionable affiliation, is given as a prime problematic example . They review evidence from fossil and extant chelicerates that supports a chelicerate groundplan in which somite VII is prosomal, as suggested previously. This possibility is further supported by the potential for appendages with leg-like morphology to develop on the spider pedicel, whether induced by application of pRNAi or alternating temperatures, and, along with transitory limb bud formation on the O1 segment in some spiders, suggests loss of somite VII appendages present in basal euchelicerate ancestors of arachnids . Thus, an interpretation of atavism for appendages developing on the pedicel in teratological spiders remains valid. Also noteworthy is the observation that, in some chelicerates, walking leg segments (all or just L4), as well as the opisthosomal segments, are derived from the SAZ and, in one known instance (a mite), O1 segmentation precedes that of L4 (reviewed in ). Thus, it seems the mechanism of segmentation during embryonic development does not necessarily provide a reliable means for assigning segments to tagmata in a way that agrees with morphological/functional regions.
Summary and future directions
By applying alternating temperatures during early spider embryogenesis, we obtained high embryo mortality, changes in number, size, and shape of appendages or their podomeres, and formation of appendages on the pedicel, a body segment (O1 = somite VII) on which appendages are not normally found in spiders. Thus, by using appropriate methods, abnormalities can be induced that potentially reflect certain ancestral traits present in basal (eu)chelicerates, including possibly atavistic appendages on segment O1. This type of developmental abnormality has a bearing on the question of the tagma to which somite VII belongs, prosoma or opisthosoma, with implications tied to chelicerate phylogeny. Based on recent research on genes that determine the formation of segments and appendages, we suspect that at least some of the observed developmental defects arising from our alternating temperature protocol are the result of blocked or otherwise aberrant expression of relevant genes, including Hox genes. Atypical expression may potentially include spatial and temporal, as well as quantitative, deviations from normal. Though the possible involvement of specific genes as discussed above is speculative, it is one step toward the goal of testing hypotheses that attribute specific anomaly types to disturbances affecting specific genes. For example, by identifying hb as a candidate gene that may have its expression distorted by the alternating temperature protocol, potentially resulting in oligomely (as discussed above), the expression of hb over time may be compared between experimental and control embryos to ascertain if the former exhibit notable deviations in expression ( e.g ., asymmetric expression) compared to the latter.
Modified versions of the alternating temperature protocol that intentionally attempt to disrupt expression of a specific gene and/or increase defect frequency can also be investigated; for example, by narrowing the treatment window and applying an abrupt temperature switch at specific times relative to the peak of expression for a given gene at given site(s) within embryos. This could lead to the establishment of a protocol able to induce certain types of anomalies with greater regularity, reducing the number of embryos that would need to be screened for defects.
Human skeletal muscle fiber heterogeneity beyond myosin heavy chains

Cellular heterogeneity is an inherent feature of all biological systems, allowing for cellular specialization to meet the diverse demands imposed upon tissues and cells . The classic view of skeletal muscle fiber heterogeneity is that motoneurons define the typology of the fibers within a motor unit, with fiber types (i.e., type 1, type 2A, and type 2X in humans) being defined by myosin heavy chain isoform (MYH) characteristics . This was first based on ATPase pH lability , and later on molecular MYH expression . However, skeletal muscle fibers are increasingly being viewed along a continuum rather than as discrete fiber types , fueled by the identification and subsequent acceptance of “hybrid” fibers that express varying proportions of multiple MYHs simultaneously. Nonetheless, the field still relies heavily on MYHs as primary classifiers of muscle fibers, a perspective that may be limited and heavily biased by early studies in rodents, which display a different MYH expression profile, and thus a different fiber type range, than humans . The picture is further complicated by different skeletal muscles displaying specialized fiber type profiles within humans . The vastus lateralis is a mixed muscle type with an average (and therefore representative) MYH expression profile . Together with its accessibility for sampling, this makes it the most commonly studied muscle in humans. An unbiased exploration of skeletal muscle fiber diversity with powerful omics tools is thus heavily warranted, but challenging, in part owing to the multinucleated nature of skeletal muscle fibers. Nonetheless, both transcriptomics , and proteomics technologies have recently experienced a sensitivity revolution due to various technological advances, making profiling of skeletal muscle at a single fiber resolution now possible.
As a result, notable progress has already been made in describing single muscle fiber diversity and responses to atrophic stimuli and aging – . Importantly, these technological advances are particularly valuable in the clinical setting, helping to describe with greater detail and precision the dysregulation associated with disease. For example, the underlying pathophysiology of nemaline myopathy, one of the most prevalent genetic muscle disorders (MIM 605355 and MIM 161800), is complex and convoluted , , and thus a deeper characterization of skeletal muscle fiber dysregulation may spur significant advances in our understanding of the disease. Our methodological development of transcriptome and proteome profiling of single skeletal muscle fibers manually isolated from human biopsy specimens, and their application to over a thousand fibers each, enables us to investigate the cellular heterogeneity of human skeletal muscle fibers. In doing so, we demonstrate the power of muscle fiber phenotyping at the transcriptomic and proteomic level, identifying metabolic, ribosomal, and cell junction proteins as important sources of variation among muscle fibers. Furthermore, with our proteomics workflow we characterize the clinical implications of nemaline myopathies within single skeletal muscle fibers, revealing a coordinated shift towards non-oxidative fibers independent of MYH-based fiber type.

Development of high-sensitivity and high-throughput single muscle fiber transcriptomic and proteomic pipelines

To investigate the heterogeneity of human skeletal muscle fibers, we developed two workflows to enable transcriptome and proteome profiling of single skeletal muscle fibers (Figs. and Supplementary fig. ). Several methodological steps were developed and optimized, from sample storage and preservation of RNA and protein integrity to optimizing the throughput of each method.
This was achieved for transcriptome analysis by inserting sample-specific molecular barcodes during the initial reverse transcription step, enabling pooling of 96 fibers for efficient downstream processing. Rich transcriptome data were further obtained by deeper sequencing (±1 M reads per fiber) compared to conventional single-cell methods . For proteomics, we used a short chromatographic gradient (21 minutes) combined with DIA-PASEF data acquisition on a timsTOF mass spectrometer to optimize proteome depth whilst maintaining high throughput , . To investigate skeletal muscle fiber heterogeneity in the healthy state, the transcriptome was determined for 1050 individual fibers from 14 healthy adult donors, whilst the proteome was determined for 1038 fibers from 5 healthy adult donors (Supplementary Table ). These datasets will be referred to as the 1000 fiber transcriptome and proteome datasets, respectively, throughout this manuscript. Our approach detected a total of 27237 transcripts and 2983 proteins in the 1000 fiber transcriptomics and proteomics studies (Figs. , Supplementary Dataset – ). After filtering for > 1000 detected genes per fiber (transcriptomics) and 50% valid values within each fiber (proteomics), downstream bioinformatic analyses were performed on 925 and 974 fibers, respectively. On average, 4257 ± 1557 genes and 2015 ± 234 proteins (mean ± SD) were detected per fiber after filtering, with limited inter-individual variation (Supplementary figs. , Supplementary Dataset - ). The intra-individual variation was more substantial, however, most likely due to differences in RNA/protein yield among fibers of different length and cross-sectional area. For the majority of proteins ( > 2000), the coefficient of variation was below 20% (Supplementary fig. ).
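The quality filtering described above — retaining fibers with sufficient detected features and computing per-protein coefficients of variation — can be sketched as follows. This is an illustrative sketch, not the authors' pipeline; the matrix layout and function names are our assumptions.

```python
import numpy as np

def qc_filter(counts, min_features=1000):
    """Keep fibers (columns of a features x fibers matrix) in which more than
    `min_features` features are detected; zeros denote non-detection."""
    detected = (counts > 0).sum(axis=0)
    return counts[:, detected > min_features]

def cv_percent(values):
    """Coefficient of variation (%) of one feature across fibers,
    ignoring missing (zero) measurements."""
    vals = values[values > 0]
    return 100.0 * vals.std() / vals.mean()
```

In practice the same filter would be applied with `min_features=1000` to the transcript count matrix, and `cv_percent` evaluated per protein across fibers.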
Both methodologies captured a wide dynamic range of transcripts and proteins, with features known to be important for muscle contraction being highly expressed (e.g., ACTA1, MYH2, MYH7, TNNT1, TNNT3) (Supplementary figs. ). A large proportion of the detected features was shared between the transcriptome and proteome datasets (Supplementary fig. ), alongside a reasonable correlation (r = 0.52) in average UMI counts/LFQ intensities for these features (Supplementary fig. ).

Type 2X is not a distinct fiber type

We initially set out to define the MYH-based fiber type of each fiber using an optimized methodology, leveraging the high sensitivity and dynamic range of MYH expression in the omics datasets. Previous studies have used arbitrary cut-offs to assign a fiber as pure type 1, type 2A, type 2X, or hybrid, based on a fixed percentage of expression for the different MYHs , , . We employed a different approach, in which we ranked the fibers by their expression of each MYH used for fiber typing: MYH7, MYH2 and MYH1, corresponding to type 1, type 2A and type 2X fibers, respectively. We then mathematically calculated the bottom knee of each of the resulting curves and used it as a threshold to assign a fiber as positive (above threshold) or negative (below threshold) for each MYH (Figs. ). These data show that MYH7 (Fig. ) and MYH2 (Fig. ) have a more pronounced on/off expression profile at the RNA level than at the protein level. Indeed, at the protein level, very few fibers did not express MYH7, and no fibers had 100% MYH2 expression. Next, we used the determined expression thresholds to assign MYH-based fiber types to all fibers in each dataset. For example, a MYH7+/MYH2-/MYH1- fiber was assigned as type 1, and a MYH7-/MYH2+/MYH1+ fiber was assigned as a hybrid type 2A/2X fiber (see Supplementary Table for a full description). When combining all fibers, a very similar MYH-based fiber type distribution was observed at the RNA (Fig. ) and protein (Fig.
) levels, with an expected inter-individual variation in relative MYH-based fiber type composition (Supplementary fig. ). Most fibers were considered pure type 1 (34–35%) or type 2A (36–38%), although a substantial number of hybrid 2A/2X fibers (16–19%) were also detected. A striking discrepancy was that pure type 2X fibers could be detected at the RNA but not the protein level, indicating that fast MYH expression is, at least in part, post-transcriptionally regulated. We validated the MYH-based fiber typing from the proteomics data against an antibody-based dot blot technique; both methodologies were in 100% agreement on the identification of pure type 1 and type 2A fibers (Supplementary fig. ). However, the more sensitive proteomics-based approach was superior at identifying hybrid fibers and at quantifying the proportion of each MYH within each fiber. These data demonstrate the efficacy of using an unbiased, high-sensitivity omics-based approach for the characterization of skeletal muscle fiber types. We then utilized the full depth of information that transcriptomics and proteomics provide to classify fibers in an unbiased manner based on their whole transcriptome or proteome. Using uniform manifold approximation and projection (UMAP) for dimension reduction of 6 principal components (Supplementary figs. ), we were able to visualize the variation in fibers in the transcriptome (Fig. ) and proteome (Fig. ). Interestingly, fibers did not cluster by participant in either the transcriptomics or proteomics datasets (Supplementary figs. ), nor by test day (Supplementary fig. ), indicating that intra-individual variance in skeletal muscle fibers outweighs inter-individual variance. Two distinct clusters were apparent in the UMAP plots, representing “fast” and “slow” fibers (Fig. ). MYH7+ (slow) fibers clustered to the positive side of UMAP1, and MYH2+ and MYH1+ (fast) fibers clustered to the negative side of UMAP1 (Fig. ).
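The "bottom knee" thresholding used above for MYH-based fiber typing can be approximated with a simple geometric heuristic: rank fibers by expression of one MYH isoform and take the ranked point farthest from the chord joining the curve's endpoints (a kneedle-style method). This is a minimal sketch of one plausible implementation, not necessarily the authors' exact algorithm.

```python
import numpy as np

def knee_threshold(expression):
    """Rank fibers by expression of one MYH isoform (descending) and return
    the expression level at the curve's knee: the ranked point farthest from
    the straight line joining the first and last points."""
    y = np.sort(np.asarray(expression, dtype=float))[::-1]
    x = np.arange(y.size, dtype=float)
    x0, y0, xn, yn = x[0], y[0], x[-1], y[-1]
    # perpendicular distance of every ranked point to the end-to-end chord
    num = np.abs((yn - y0) * x - (xn - x0) * y + xn * y0 - yn * x0)
    dist = num / np.hypot(yn - y0, xn - x0)
    return float(y[dist.argmax()])
```

A fiber would then be called positive for a given MYH when its expression lies above the returned threshold.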
No distinction between the various fast MYH-based fiber types (i.e., type 2A, type 2X, or hybrid 2A/2X) could be identified, however, suggesting that when the whole transcriptome or proteome is taken into account, the expression of MYH1 (Fig. ), or other classical markers of type 2X fibers like ACTN3 or MYLK2 (Supplementary figs. ), does not discriminate between distinct fiber types. Furthermore, in contrast to MYH2 and MYH7, few transcripts or proteins positively correlate with MYH1 (Supplementary figs. ), suggesting that MYH1 abundance does not adequately reflect the muscle fiber transcriptome/proteome. Similar conclusions can be drawn when assessing the blended expression of the three MYH isoforms at the UMAP level (Supplementary figs. ). Thus, whilst type 2X fibers can be identified at the transcriptional level based solely on the quantification of MYHs, MYH1+ fibers are not distinct from other fast fibers when the whole transcriptome or proteome is considered.

Considerable skeletal muscle fiber heterogeneity beyond myosin heavy chain isoforms

As an initial exploration of fiber heterogeneity beyond MYHs, we assessed four established slow fiber type-specific proteins: TPM3, TNNT1, MYL3, and ATP2A2 . In both the transcriptomics (Supplementary fig. ) and proteomics (Supplementary fig. ) approaches, the slow isoforms exhibited a high, although not perfect, Pearson correlation coefficient with MYH7. Approximately 25% and 33% of the fibers in the transcriptomics (Supplementary fig. ) and proteomics (Supplementary fig. ) approaches, respectively, were not classified as pure slow fibers by all gene/protein isoforms. Thus, fiber typing based on multiple gene/protein isoforms introduces additional complexity, even with well-known proteins that are assumed to be fiber type-specific. This suggests that fiber typing based on isoforms of a single family of genes/proteins is likely inadequate to capture the true heterogeneity of skeletal muscle fibers.
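The cross-isoform consistency check described above — asking whether several slow-isoform markers agree on each fiber's identity — can be sketched as below. The above-median split used to call a fiber positive for each gene is an illustrative simplification of the knee-based thresholding; the function name is ours.

```python
import numpy as np

SLOW_GENES = ("MYH7", "TPM3", "TNNT1", "MYL3", "ATP2A2")

def isoform_agreement(expr, genes=SLOW_GENES):
    """For each fiber, call it slow by each marker gene independently (here a
    simple above-median split) and return the fraction of fibers on which
    all markers agree (all positive or all negative).
    `expr` maps gene name -> per-fiber expression array."""
    calls = np.vstack([expr[g] > np.median(expr[g]) for g in genes])
    consistent = calls.all(axis=0) | (~calls).all(axis=0)
    return float(consistent.mean())
```

An agreement fraction well below 1.0, as reported above, indicates that different slow isoforms classify an appreciable share of fibers differently.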
To further investigate the omics-wide phenotypic variability between human skeletal muscle fibers, we applied unbiased dimensionality reduction by principal component analysis (PCA) to our data (Fig. ). As with the UMAP plots, neither participant nor test day influenced the clustering of fibers at the PCA level (Supplementary figs. ). MYH-based fiber type was captured by PC2 in both datasets, which separated a cluster of slow type 1 fibers from a second cluster containing the fast type 2A, type 2X and hybrid type 2A/2X fibers (Fig. ). These two clusters were bridged in both datasets by a small number of hybrid type 1/2A fibers. As expected, over-representation analysis of the top PC drivers confirmed that PC2 is driven by contractile and metabolic features (Fig. & Supplementary fig. , Supplementary Dataset - ). In general, the MYH-based fiber types adequately explain the continuous variation along PC2, except for the so-called type 2X fibers, which were spread across the entirety of the transcriptomic fast cluster. Unexpectedly, MYH-based fiber type explained only the second greatest degree of variability (PC2), indicating that other biological factors (PC1), independent of MYH-based fiber type, have a substantial role in regulating skeletal muscle fiber heterogeneity. Over-representation analysis of the top drivers in PC1 indicated that variance within PC1 was determined primarily by cell-cell adhesion and ribosomal content in the transcriptome, and by costamere and ribosomal proteins in the proteome (Figs. & Supplementary fig. , Supplementary Dataset ). In skeletal muscle, the costamere connects the Z-disk to the sarcolemma and participates in force transmission and signaling . Annotating PCA plots with cell-cell adhesion (transcriptome, Fig. ) and costamere (proteome, Fig. ) features showed a strong shift to the left side of PC1, suggesting an enrichment for these features in some fibers.
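Ranking features by their absolute loadings on a principal component — the "top drivers" analysis above — can be sketched with an SVD-based PCA. This is a generic numpy sketch (function and argument names are ours), not the authors' code.

```python
import numpy as np

def pca_top_drivers(matrix, feature_names, pc=0, n_top=5):
    """Centre a fibers x features matrix, run PCA via SVD, and return the
    features with the largest absolute loadings on the chosen component."""
    X = matrix - matrix.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)  # rows of Vt = PC axes
    loadings = Vt[pc]
    order = np.argsort(-np.abs(loadings))[:n_top]
    return [(feature_names[i], float(loadings[i])) for i in order]
```

The returned top drivers would then be passed to an over-representation test against gene ontology terms, as done above for PC1 and PC2.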
Closer inspection of fiber clustering at the UMAP level showed a MYH-based fiber type-independent gradient of expression for most features, rather than distinct subclusters of muscle fibers. This continuum holds true for several genes related to pathological conditions (Fig. ), such as CHCHD10 (neuromuscular disorders) , SLIT3 (muscle loss) and CTDNEP1 (muscle disease) . The same continuum was observed in the proteome, with proteins related to neurological diseases (UGDH) , insulin signaling (PHIP) and transcription (HIST1H2AB) (Fig. ). These data collectively show that slow/fast fiber type-independent heterogeneity across fibers is continuous in nature.

Fiber heterogeneity is post-transcriptionally regulated by ribosomal heterogeneity

Interestingly, drivers of PC2 showed a good correlation (r = 0.663) between the transcriptome and proteome (Fig. ), indicating that the slow and fast fiber types, particularly the contractile and metabolic profiles of skeletal muscle fibers, are transcriptionally regulated. However, there was no correlation between the drivers of PC1 in the transcriptome and the proteome (r = -0.027) (Fig. ). This suggests that slow/fast fiber type-independent variance is largely post-transcriptionally regulated. Since PC1 variance was largely explained by ribosomal gene ontology terms, and given that ribosomes play a profound and specialized role in the cell by actively participating in and influencing protein translation , we next set out to explore this unexpected ribosomal heterogeneity. We first colored the proteomics PCA plot based on the relative abundance of the proteins within the “cytosolic ribosome” GOCC term (Fig. ). Although the term was enriched on the positive side of PC1, and a slight gradient could be observed accordingly, ribosomal proteins were driving separation in both directions of PC1 (Fig. ). Amongst the ribosomal proteins enriched in the negative direction of PC1 were RPL18, RPS18 and RPS13 (Fig.
), whilst RPL31, RPL35 and RPL38 (Fig. ) were major drivers in the positive direction of PC1. Interestingly, RPL38 and RPS13 are highly expressed in skeletal muscle when compared to other tissues (Supplementary fig. ). These distinct ribosomal signatures across PC1 could not be observed in the transcriptome (Supplementary fig. ), indicating that this phenomenon is post-transcriptionally regulated. The concept of ribosomal heterogeneity and specialization has previously been introduced: distinct subpopulations of ribosomes (ribosomal heterogeneity) can directly influence protein translation in different tissues and cells by selectively translating specific mRNA transcript pools (ribosomal specialization). To identify sub-sets of ribosomal proteins that are co-expressed within skeletal muscle fibers, we performed an unsupervised hierarchical clustering analysis of ribosomal proteins within the proteome (Figs. , Supplementary Dataset ). As expected, ribosomal proteins did not cluster by MYH-based fiber type. However, we identified three distinct clusters of ribosomal proteins; the first (ribosomal_cluster_1) was coregulated alongside RPL38 and therefore elevated in fibers in the positive direction of PC1. The second cluster (ribosomal_cluster_2) was coregulated alongside RPS13 and was elevated in fibers in the negative direction of PC1. A third cluster (ribosomal_cluster_3) displayed no coordinated differential expression within skeletal muscle fibers and could therefore be considered “core” ribosomal proteins within skeletal muscle. Both ribosomal_cluster_1 and ribosomal_cluster_2 contain ribosomal proteins previously demonstrated to regulate selective translation (e.g., RPL10A, RPL38, RPS19 and RPS25) and to functionally influence development (e.g., RPL10A, RPL38) – . In line with the results from the PCA analysis, the observed heterogeneous abundance of these ribosomal proteins across fibers was also continuous in nature (Supplementary fig. ).
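Assessing ribosomal protein stoichiometry, as above, typically means normalizing each ribosomal protein to the fiber's total ribosomal signal, so that compositional shifts are not masked by differences in overall ribosome abundance. A minimal sketch of such a normalization (our illustrative choice, not necessarily the authors' exact procedure):

```python
import numpy as np

def relative_stoichiometry(ribo):
    """Express each ribosomal protein as a fraction of the fiber's total
    ribosomal signal, then centre per protein, so values reflect stoichiometry
    rather than ribosome abundance. `ribo`: fibers x ribosomal proteins."""
    frac = ribo / ribo.sum(axis=1, keepdims=True)
    return frac - frac.mean(axis=0)
```

The centred matrix could then be passed to hierarchical clustering to recover co-regulated groups such as ribosomal_cluster_1 and ribosomal_cluster_2.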
To visualize the position of the ribosomal proteins that display heterogeneity within the ribosome, we utilized a structural model of the human 80S ribosome (Protein Data Bank: 4V6X) (Fig. ). Highlighting the ribosomal proteins belonging to the different ribosomal clusters showed that they are not located in close proximity, indicating that our approach did not enrich for a particular region of the ribosome. Interestingly, however, ribosomal_cluster_2 contained a lower proportion of large ribosomal subunit proteins than ribosomal_cluster_1 and ribosomal_cluster_3 (Supplementary fig. ). We observed that the majority of the proteins that display variable stoichiometry within skeletal muscle fibers are located on the surface of the ribosome (Fig. ), which is consistent with an ability to interact with internal ribosome entry site (IRES) elements within distinct mRNA populations to coordinate selective translation , . Furthermore, numerous proteins that display variable stoichiometry within skeletal muscle fibers are located close to functional regions, such as the mRNA exit tunnel (Fig. ), which can selectively regulate translation elongation and arrest of specific nascent peptides . Overall, our data identify that skeletal muscle ribosomal protein stoichiometry displays heterogeneity, driving variance between skeletal muscle fibers.

Slow and fast fiber signatures and their transcriptional regulators

We next set out to identify features of fast and slow skeletal muscle fibers and how these are transcriptionally regulated. Comparing the fast and slow clusters defined in the UMAPs of both datasets (Figs. & inlays Fig. ), transcriptome and proteome analysis yielded 1366 and 804 differentially abundant features, respectively (Figs. , Supplementary Dataset – ). Expected differences in sarcomeric (e.g., tropomyosin and troponin), excitation-contraction coupling (SERCA isoforms) and energy metabolism-related (e.g., ALDOA and CKB) features were observed.
In addition, transcripts and proteins regulating protein ubiquitination displayed differences between fast and slow fibers (e.g., USP54, SH3RF2 , USP28 and USP48) (Fig. ). Furthermore, the microprotein-encoding gene RP11-451G4.2 (DWORF) , which has previously been shown to be differentially expressed between lamb muscle fiber types and to enhance SERCA activity in cardiac muscle , was significantly up-regulated in slow skeletal muscle fibers (Fig. ). Also at the single fiber level, clear differences could be observed for known features such as the metabolism-related isoforms of lactate dehydrogenase (LDHA and LDHB, Figs. and Supplementary fig. ) , , as well as previously unknown fiber type-specific features (e.g., IRX3 , USP54 , USP28, and DPYSL3) (Fig. ). There was a reasonable overlap of differentially expressed features (Supplementary fig. ) and correlation of fold changes between the transcriptomic and proteomic datasets, primarily driven by the large expression differences in sarcomeric features (Supplementary fig. ). Notably, some features (e.g., USP28, USP48, GOLGA4, AKAP13) showed strong post-transcriptional regulation, with slow/fast fiber type-specific expression profiles only at the proteome level (Supplementary fig. ). We next performed over-representation analysis on the differentially abundant genes and proteins (Figs. , Supplementary Dataset ). Enriched pathways of features that were differential in both datasets showed expected differences such as fatty acid beta-oxidation and ketone metabolic process (slow fibers), myofilament/muscle contraction (fast and slow fibers, respectively) and carbohydrate catabolic process (fast fibers). Serine/threonine protein phosphatase activity was also enriched in fast fibers, driven by features such as phosphatase regulatory and catalytic subunits (PPP3CB, PPP1R3D, and PPP1R3A), which are known to regulate glycogen metabolism (Supplementary figs. ).
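A basic differential-abundance comparison between the slow and fast clusters, as for LDHA/LDHB above, can be sketched as mean log2 fold changes per feature. This is a deliberate simplification: a real analysis would add statistical testing and multiple-testing correction.

```python
import numpy as np

def log2_fold_changes(slow, fast, names):
    """Mean log2 fold change (fast over slow) per feature, sorted from most
    fast-enriched to most slow-enriched. `slow`/`fast` are fibers x features
    intensity matrices; a small epsilon guards against log2(0)."""
    eps = 1e-9
    lfc = np.log2(fast.mean(axis=0) + eps) - np.log2(slow.mean(axis=0) + eps)
    order = np.argsort(-lfc)
    return [(names[i], float(lfc[i])) for i in order]
```

Positive values correspond to fast-fiber-enriched features (e.g., LDHA-like behavior) and negative values to slow-fiber-enriched ones (LDHB-like behavior).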
Other enriched pathways in fast fibers were processing (P-) bodies (YTHDF3, TRIM21, LSM2) in the proteome (Supplementary fig. ), possibly related to post-transcriptional regulation , and transcription factor activity ( SREBF1 , RXRG , RORA ) in the transcriptome (Supplementary fig. ). Slow fibers showed enrichment for oxidoreductase activity ( BDH1 , DCXR , TXN2 ) (Supplementary fig. ), amide binding ( CPTP , PFDN2 , CRYAB ) (Supplementary fig. ), extracellular matrix ( CTSD , ADAMTSL4 , LAMC1 ) (Supplementary fig. ) and receptor-ligand activity ( FNDC5 , SPX , NENF ) (Supplementary fig. ). To gain more insight into the transcriptional regulation of slow/fast fiber type signatures, we performed transcription factor enrichment analysis using SCENIC (Supplementary Dataset ). Many transcription factors were differentially enriched between fast and slow fibers (Fig. ). These included transcription factors such as MAFA , previously linked to the development of fast fibers , but also multiple transcription factors not previously linked to the fiber type-specific gene program. Among these, PITX1 , EGR1 and MYF6 were the most enriched transcription factors within fast fibers (Fig. ). Conversely, ZSCAN30 and EPAS1 (also known as HIF2A ) were the most enriched transcription factors within slow fibers (Fig. ). In line with this, MAFA expression levels were higher in the UMAP area corresponding to fast muscle fibers, whereas the opposite expression pattern was observed for EPAS1 (Fig. ).

Non-coding RNA and associated microproteins in human skeletal muscle fibers

Alongside known protein-coding genes, there exists a multitude of non-coding RNA biotypes, potentially involved in the regulation of human development and disease , . Several non-coding RNAs displayed fiber type specificity in the transcriptomics dataset (Figs.
& Supplementary Dataset ), including LINC01405 , which is very specific to slow fibers and is reported to be downregulated in muscle of mitochondrial myopathy patients . Conversely, RP11-255P5.3 , corresponding to the lnc-ERCC5-5 gene ( https://lncipedia.org/db/transcript/lnc-ERCC5-5:2 ), displayed fast fiber type-specificity. Both LINC01405 ( https://tinyurl.com/x5k9wj3h ) and RP11-255P5.3 ( https://tinyurl.com/29jmzder ) display specificity to skeletal muscle (Supplementary figs. ) and have very few or no known contractile genes within a 1 Mb genomic neighborhood, suggesting a specialized role in fiber type regulation rather than a regulatory role for a neighboring contractile gene. The slow/fast fiber type-specific expression profiles of LINC01405 and RP11-255P5.3 were independently validated using RNAscope (Fig. ). Recently, it has become apparent that numerous assumed non-coding transcripts encode translated microproteins, some of which regulate muscle function , . To identify microproteins with potential fiber type specificity, we searched our 1000 fiber proteomic dataset using a custom FASTA file containing sequences from the detected non-coding transcripts ( n = 305) from the 1000 fiber transcriptome dataset (Fig. ). This resulted in the identification of 197 microproteins arising from 22 distinct transcripts, of which 71 display differential regulation between slow and fast skeletal muscle fibers (Supplementary figs. and Supplementary Dataset ). Three microprotein products were identified for LINC01405 , one of which displays a slow fiber type specificity similar to its transcript (Figs. and Supplementary fig. ). Thus, we identify LINC01405 as a microprotein-encoding gene with specificity for slow skeletal muscle fibers.
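Building a custom FASTA of candidate microproteins from assumed non-coding transcripts, as above, starts with scanning reading frames for open reading frames (ATG ... stop) above a length cutoff. A simplified forward-strand sketch follows; real pipelines would also scan the reverse complement and translate the resulting ORFs into peptide sequences.

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_aa=10):
    """Scan the three forward reading frames of a transcript for open reading
    frames (ATG ... stop) encoding at least `min_aa` residues. Returns
    (frame, start, end) tuples, `end` being the position of the stop codon."""
    seq = seq.upper()
    orfs = []
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if start is None and codon == "ATG":
                start = i  # open a candidate ORF at the first ATG
            elif start is not None and codon in STOP_CODONS:
                if (i - start) // 3 >= min_aa:
                    orfs.append((frame, start, i))
                start = None  # close the ORF at the in-frame stop
    return orfs
```

Each reported ORF would then be translated and appended to the search database used by the proteomics search engine.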
Nemaline myopathies induce a shift towards faster, less oxidative skeletal muscle fibers

Having developed a comprehensive workflow to characterize the proteome of single muscle fibers at scale and discovered regulators of fiber heterogeneity in the healthy state, we applied this pipeline to understand how nemaline myopathy impacts skeletal muscle fiber heterogeneity. Nemaline myopathy is a genetic muscular disorder that causes muscle weakness, resulting in a range of complications for affected children, including respiratory difficulties, scoliosis, and physical immobility , . Typically, in nemaline myopathy, pathogenic variants in genes such as actin alpha 1 ( ACTA1 ) drive the fiber type composition towards slow fiber predominance, although this effect is heterogeneous. The only clear exception is troponin T1 ( TNNT1 ) nemaline myopathy, in which a fast fiber predominance is seen. Thus, a deeper characterization of the heterogeneity behind the skeletal muscle fiber dysregulation observed in nemaline myopathies may help untangle the complex relationship between these diseases and muscle fiber types. Muscle fibers isolated from patients with ACTA1 - and TNNT1 -mutation derived nemaline myopathies display substantial myofiber atrophy or hypotrophy compared to healthy controls ( n = 3 per group) (Figs. , Supplementary Table ), which presents a considerable technical challenge owing to the limited material available for proteomic analysis. Nonetheless, we were able to detect 2485 proteins from 272 skeletal muscle fibers. After filtering for a minimum of 1000 quantified proteins per fiber, downstream bioinformatic analyses were performed on 250 fibers. On average, 1573 ± 359 proteins were quantified per fiber after filtering (Supplementary figs. , Supplementary Dataset – ). Importantly, only a modest reduction in proteome depth was apparent in samples from patients with nemaline myopathy, despite the markedly reduced fiber size.
Furthermore, processing of this data with our custom FASTA file (including non-coding transcripts) identified five microproteins within the skeletal muscle fibers from nemaline myopathy patients (Supplementary Dataset ). A wide dynamic range was apparent in the proteome, whilst the shared proteins in the control participants correlated well with those in the previous analysis of the 1000 fiber proteome study (Supplementary fig. ). As nemaline myopathies influence the MYH-based fiber type proportions within skeletal muscle , , we first investigated the MYH-based fiber type of our nemaline myopathy patients and controls. Fiber type was determined using the unbiased approach described previously for the 1000 fiber studies (Supplementary figs. ) and, once again, pure type 2X fibers could not be identified (Fig. ). We observed a heterogeneous effect of nemaline myopathies on fiber type, with two patients with ACTA1 mutations displaying an increased proportion of type 1 fibers, whilst two patients with TNNT1 -nemaline myopathy displayed a reduced proportion of type 1 fibers (Fig. ). Indeed, MYH2 and the fast troponin isoforms (TNNC2, TNNI2, and TNNT3) were downregulated in ACTA1 -nemaline myopathy, whilst MYH7 was downregulated in TNNT1 -nemaline myopathy (Supplementary fig. ). This is in line with previous reports of heterogeneous fiber type switching in nemaline myopathies , . We validated these findings using immunohistochemistry, finding type 1 fiber predominance in ACTA1 -nemaline myopathy patients, while the opposite was observed in TNNT1 -nemaline myopathy patients (Fig. ). At the single-fiber proteome level, skeletal muscle fibers from ACTA1 - and TNNT1 -nemaline myopathy patients clustered away from the majority of control fibers, with TNNT1 -nemaline myopathy fibers tending to be the most severely affected (Fig. ).
This was particularly apparent when we produced a PCA plot of pseudo-bulked fibers for each patient, with TNNT1 -nemaline myopathy patients 2 & 3 lying furthest from the control samples (Supplementary figs. , Supplementary Dataset ). To further understand how fibers from myopathy patients compare to the healthy condition, we capitalized on the depth of information provided by our analysis in the 1000 fiber proteome study from healthy adult participants. We projected the fibers from our myopathy dataset (both ACTA1 - and TNNT1 -nemaline myopathy patients and controls) onto the PCA determined from the 1000 fiber proteome study (Fig. ). Control fibers displayed a similar distribution of MYH-based fiber type along PC2 as the 1000 fiber proteome study. However, the majority of fibers from nemaline myopathy patients shifted downwards along PC2, overlapping with the healthy fast fibers, irrespective of their own MYH-based fiber type. Thus, despite evidence for a fiber type shift towards type 1 fibers in ACTA1 -nemaline myopathy patients when quantified using MYH-based approaches, both ACTA1 - and TNNT1 -nemaline myopathies shift the skeletal muscle fiber proteome to resemble fast fibers. We next directly compared each patient group with the samples from healthy controls, resulting in 256 and 552 differentially abundant proteins in ACTA1 - and TNNT1 -nemaline myopathies, respectively (Figs. & Supplementary figs. , Supplementary Dataset ). Gene set enrichment analysis identified the coordinated reduction of mitochondrial proteins (Figs. , Supplementary Dataset ). Surprisingly, this was completely independent of MYH-based fiber type (Figs. & Supplementary fig. , Supplementary Dataset ), despite the divergent fiber type predominance between ACTA1 - and TNNT1 -nemaline myopathies. Three microproteins were also regulated by either ACTA1 - or TNNT1 -nemaline myopathies.
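Projecting patient fibers onto the PCA space defined by the healthy 1000 fiber dataset, as above, means reusing the reference feature means and principal axes rather than refitting on the patient data. A minimal numpy sketch (function names are ours; note that the sign of each SVD-derived axis is arbitrary):

```python
import numpy as np

def fit_reference_pca(reference, n_pcs=2):
    """Fit PCA (via SVD) on the healthy reference matrix (fibers x features);
    returns the feature means and principal axes needed for projection."""
    mean = reference.mean(axis=0)
    _, _, Vt = np.linalg.svd(reference - mean, full_matrices=False)
    return mean, Vt[:n_pcs]

def project(fibers, mean, axes):
    """Project new (e.g., patient) fibers into the reference PCA space
    without refitting."""
    return (fibers - mean) @ axes.T
```

Because centering uses the reference means, a systematic shift of patient fibers along a reference component (as seen here along PC2) is directly interpretable relative to the healthy fiber distribution.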
Two of those microproteins, ENSG00000215483_TR14_ORF67 (also known as LINC00598 or Lnc-FOXO1 ) and ENSG00000229425_TR25_ORF40 ( lnc-NRIP1-2 ), displayed differential abundance only within type 1 fibers, with ENSG00000215483_TR14_ORF67 having previously been reported to play a role in cell cycle regulation . On the other hand, ENSG00000232046_TR1_ORF437 (corresponding to LINC01798 ) was upregulated in both type 1 and type 2A fibers from ACTA1 -nemaline myopathy when compared to healthy controls (Supplementary figs. , Supplementary Dataset ). Conversely, ribosomal proteins were largely unaffected by nemaline myopathies, although RPS17 was downregulated in ACTA1 -nemaline myopathy (Fig. ). Enrichment analysis also identified the up-regulation of immune system processes in both ACTA1 - and TNNT1 -nemaline myopathies, as well as cell adhesion in TNNT1 -nemaline myopathy (Fig. ). The enrichment of these extracellular terms was reflected by extracellular matrix proteins driving the PCA in the negative direction in both PC1 and PC2, i.e., towards the most affected fibers (Fig. ). Both patient groups overexpressed extracellular proteins involved in the immune response and the sarcolemma repair machinery, such as annexins (ANXA1, ANXA2, ANXA5) , and their interactor S100A11 (Supplementary figs. ). This process has previously been reported to be upregulated in muscle dystrophies , yet to our knowledge it has not been associated with nemaline myopathies before. Normal function of this molecular machinery is required both for sarcolemma membrane repair upon injury and for the fusion of new myoblasts to muscle fibers , . Therefore, an up-regulation of this process in both patient groups suggests a reparative response to damage caused by myofibril instability. The effects of each nemaline myopathy were well correlated (r = 0.736) and displayed reasonable overlap (Supplementary fig. ), indicating that ACTA1 - and TNNT1 -nemaline myopathies induce similar effects on the proteome.
Nonetheless, a number of proteins displayed regulation in only ACTA1 - or TNNT1 -nemaline myopathies (Supplementary fig. ). MFAP4, a pro-fibrotic protein, was one of the most up-regulated proteins in TNNT1 -nemaline myopathy whilst unchanged in ACTA1 -nemaline myopathy. SKIC8, a component of the PAF1C complex, which regulates the transcription of HOX genes , was down-regulated in TNNT1 -nemaline myopathy but was unaffected in ACTA1 -nemaline myopathy (Supplementary fig. ). Direct comparisons between ACTA1 - and TNNT1 -nemaline myopathies identify a greater effect of TNNT1 -nemaline myopathies on the reduction of mitochondrial proteins and the increase of immune system proteins (Fig. & Supplementary figs. & Supplementary fig. ). These data are consistent with the greater degree of atrophy/hypotrophy apparent in TNNT1 - compared to ACTA1 -nemaline myopathies (Fig. ), indicating that TNNT1 -nemaline myopathy is the more severe form of the disease. In order to assess whether the observed effects of nemaline myopathy were also present at the whole muscle-level, we conducted a bulk proteome analysis on muscle biopsies from the same TNNT1- nemaline myopathy patients and compared them against control individuals ( n = 3 per group) (Supplementary fig. , Supplementary Dataset ). As expected, upon PCA, control individuals tightly clustered together whereas, similarly to the single fiber analysis, and TNNT1- nemaline myopathy patients displayed a higher inter-sample variance (Supplementary fig. ). The bulk analysis was able to recapitulate the differentially expressed proteins (Supplementary figs. , Supplementary Dataset ) and biological processes (Supplementary fig. , Supplementary Dataset ) highlighted in the single fiber comparison, although lost the potential to discriminate between fiber types, and did not account for the heterogeneous effect of the disease across different fibers. 
To investigate the heterogeneity of human skeletal muscle fibers, we developed two workflows to enable transcriptome and proteome profiling of single skeletal muscle fibers (Figs. and Supplementary fig. ). Several methodological steps were developed and optimized, from sample storage and preservation of RNA and protein integrity to optimizing the throughput of each method. For transcriptome analysis, this was achieved by inserting sample-specific molecular barcodes during the initial reverse transcription step, enabling pooling of 96 fibers for efficient downstream processing. Rich transcriptome data were further obtained by deeper sequencing ( ± 1 M reads per fiber) compared to conventional single cell methods . For proteomics, we used a short chromatographic gradient (21 minutes) combined with DIA-PASEF data acquisition on a timsTOF mass spectrometer to optimize proteome depth whilst maintaining high throughput , . To investigate skeletal muscle fiber heterogeneity in the healthy state, the transcriptome was determined for 1050 individual fibers from 14 healthy adult donors, whilst the proteome was determined for 1038 fibers from 5 healthy adult donors (Supplementary Table ). These datasets will be referred to as the 1000 fiber transcriptome and proteome datasets, respectively, throughout this manuscript.
Our approach detected a total of 27237 transcripts and 2983 proteins in the 1000 fiber transcriptomics and proteomics studies (Figs. , Supplementary Dataset – ). After filtering for > 1000 detected genes per fiber in the transcriptomics dataset and for 50% valid values per fiber in the proteomics dataset, downstream bioinformatic analyses were performed on 925 and 974 fibers, respectively. On average, 4257 ± 1557 genes and 2015 ± 234 proteins (mean ± SD) were detected per fiber after filtering, with limited inter-individual variation (Supplementary figs. , Supplementary Dataset - ). The intra-individual variation within a participant was, however, more substantial, most likely due to differences in RNA/protein yield among fibers of different length and cross-sectional area. For the majority of proteins ( > 2000), the coefficient of variation was below 20% (Supplementary fig. ). Both methodologies captured a wide dynamic range of transcripts and proteins, with features known to be important for muscle contraction being highly expressed (e.g., ACTA1, MYH2, MYH7, TNNT1, TNNT3) (Supplementary figs. ). A large proportion of the detected features was shared between the transcriptome and proteome datasets (Supplementary fig. ), alongside a reasonable correlation ( r = 0.52) in average UMI counts/LFQ intensities for these features (Supplementary fig. ). We initially set out to define the MYH-based fiber type of each fiber using an optimized methodology, leveraging the high sensitivity and dynamic range of MYH expression in the omics datasets. Previous studies have used arbitrary cut-offs to assign a fiber as pure type 1, type 2A, type 2X, or hybrid, based on a fixed percentage of expression for the different MYHs , , . We employed a different approach, in which we ranked all fibers by their expression of each MYH used for fiber typing: MYH7, MYH2 and MYH1, corresponding to type 1, type 2A and type 2X fibers, respectively.
We then mathematically calculated the bottom knee of each resulting curve and used it as a threshold to assign a fiber as positive (above threshold) or negative (below threshold) for each MYH (Figs. ). These data show that MYH7 (Fig. ) and MYH2 (Fig. ) have a more pronounced on/off expression profile at the RNA level compared to the protein level. Indeed, at the protein level, very few fibers did not express MYH7 and no fibers had 100% MYH2 expression. Next, we used the determined expression thresholds to assign MYH-based fiber types to all fibers in each dataset. For example, a MYH7+/MYH2-/MYH1- fiber was assigned as type 1, and a MYH7-/MYH2+/MYH1+ fiber as a hybrid type 2A/2X fiber (see Supplementary Table for full description). When combining all fibers, a very similar MYH-based fiber type distribution was observed at the RNA (Fig. ) and protein (Fig. ) levels, with an expected inter-individual variation in relative MYH-based fiber type composition (Supplementary fig. ). Most fibers were considered pure type 1 (34–35%) or type 2A (36–38%), although a substantial number of hybrid 2A/2X fibers (16–19%) were also detected. A striking discrepancy was that pure type 2X fibers could only be detected at the RNA but not at the protein level, indicating that fast MYH expression is, at least in part, post-transcriptionally regulated. We validated the MYH-based fiber typing from the proteomics data against an antibody-based dot blot technique, and both methodologies were in 100% agreement on the identification of pure type 1 and type 2A fibers (Supplementary fig. ). However, the more sensitive proteomics-based approach was superior at identifying hybrid fibers and at quantifying the proportion of each MYH within each fiber. These data demonstrate the efficacy of an unbiased, high-sensitivity omics-based approach for the characterization of skeletal muscle fiber types.
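The ranked-expression thresholding described above can be sketched in a few lines of numpy. This is an illustrative implementation of our own; the maximum-distance-to-chord knee heuristic and the function names are assumptions, not the study's actual code:

```python
import numpy as np

def knee_threshold(values):
    """Estimate the 'bottom knee' of a ranked expression curve.

    Fibers are sorted by descending expression of one MYH isoform; the
    knee is taken as the point furthest from the chord joining the two
    endpoints of the curve (a simple Kneedle-style heuristic).
    """
    y = np.sort(np.asarray(values, dtype=float))[::-1]
    x = np.arange(y.size, dtype=float)
    chord = np.array([x[-1] - x[0], y[-1] - y[0]])
    chord /= np.linalg.norm(chord)
    vecs = np.stack([x - x[0], y - y[0]], axis=1)
    # Perpendicular distance of every point from the chord
    resid = vecs - np.outer(vecs @ chord, chord)
    return y[int(np.argmax(np.linalg.norm(resid, axis=1)))]

def call_fiber_type(myh7, myh2, myh1, thresholds):
    """Label a fiber by which MYH isoforms exceed their thresholds."""
    expr = np.array([myh7, myh2, myh1], dtype=float)
    mask = expr > np.asarray(thresholds, dtype=float)
    labels = np.array(["MYH7", "MYH2", "MYH1"])[mask]
    return "/".join(labels) if labels.size else "unassigned"
```

Because each threshold is derived from that isoform's own ranked curve, no arbitrary fixed percentage cut-off is needed, which is the point of the approach described above.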
We then utilized the full depth of information that transcriptomics and proteomics provide to classify fibers in an unbiased manner based on their whole transcriptome or proteome. Using uniform manifold approximation and projection (UMAP) for dimension reduction of 6 principal components (Supplementary figs. ), we were able to visualize the variation among fibers in the transcriptome (Fig. ) and proteome (Fig. ). Interestingly, fibers did not cluster by participant in either the transcriptomics or proteomics datasets (Supplementary figs. ), nor by test day (Supplementary fig. ), indicating that intra-individual variance in skeletal muscle fibers outweighs inter-individual variance. Two distinct clusters were apparent in the UMAP plots, representing “fast” and “slow” fibers (Fig. ). MYH7 + (slow) fibers clustered to the positive side of UMAP1, and MYH2 + and MYH1 + (fast) fibers clustered to the negative side of UMAP1 (Fig. ). No distinction between the various fast MYH-based fiber types (i.e., type 2A, type 2X, or hybrid 2A/2X) could be identified, however, suggesting that when the whole transcriptome or proteome is taken into account, the expression of MYH1 (Fig. ), or of other classical markers of type 2X fibers such as ACTN3 or MYLK2 (Supplementary figs. ), does not discriminate between distinct fiber types. Furthermore, in contrast to MYH2 and MYH7, few transcripts or proteins positively correlate with MYH1 (Supplementary figs. ), suggesting that MYH1 abundance does not adequately reflect the muscle fiber transcriptome/proteome. Similar conclusions can be drawn when assessing the blended expression of the three MYH isoforms at the UMAP level (Supplementary figs. ). Thus, whilst type 2X fibers can be identified at the transcriptional level based solely on the quantification of MYHs, MYH1 + fibers are not distinct from other fast fibers when the whole transcriptome or proteome is considered.
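The observation that few features track with MYH1, in contrast to MYH2, can be illustrated with a simple per-feature Pearson correlation screen. This is a self-contained numpy sketch on simulated data; the matrix, feature names, and the 0.5 threshold are our assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n_fibers = 200

# Simulated fiber x feature matrix. 'Fast-axis' features co-vary with
# MYH2, while MYH1 varies largely independently, mimicking the pattern
# reported in the text. All names and effect sizes are illustrative.
fast_axis = rng.normal(size=n_fibers)
expr = np.column_stack([
    fast_axis + rng.normal(scale=0.2, size=n_fibers),   # MYH2
    fast_axis + rng.normal(scale=0.3, size=n_fibers),   # ATP2A1-like
    -fast_axis + rng.normal(scale=0.3, size=n_fibers),  # MYH7-like
    rng.normal(size=n_fibers),                          # MYH1
    rng.normal(size=n_fibers),                          # unrelated feature
])
features = ["MYH2", "ATP2A1", "MYH7", "MYH1", "OTHER"]

def positive_correlates_of(target, r_min=0.5):
    """Features whose Pearson r with the target exceeds r_min."""
    t = expr[:, features.index(target)]
    hits = []
    for name, column in zip(features, expr.T):
        if name != target and np.corrcoef(t, column)[0, 1] > r_min:
            hits.append(name)
    return hits
```

On this toy matrix, `positive_correlates_of("MYH2")` recovers the co-varying fast feature, whereas `positive_correlates_of("MYH1")` returns nothing, mirroring the pattern described in the text.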
As an initial exploration of fiber heterogeneity beyond MYHs, we assessed four established slow fiber type-specific proteins: TPM3, TNNT1, MYL3, and ATP2A2 . In both the transcriptomics (Supplementary fig. ) and proteomics (Supplementary fig. ) approaches, the slow isoforms exhibited a high, although not perfect, Pearson correlation coefficient with MYH7. Approximately 25% and 33% of the fibers in the transcriptomics (Supplementary fig. ) and proteomics (Supplementary fig. ) approaches, respectively, were not classified as pure slow fibers by all gene/protein isoforms. Thus, fiber typing based on multiple gene/protein isoforms introduces additional complexity, even with well-known proteins that are assumed to be fiber type-specific. This suggests that fiber typing based on isoforms of a single family of genes/proteins is likely inadequate to capture the true heterogeneity of skeletal muscle fibers. To further investigate the omics-wide phenotypical variability between human skeletal muscle fibers, we applied unbiased dimensionality reduction by principal component analysis (PCA) to our data (Fig. ). Similarly to the UMAP plot, neither participant nor test day influenced the clustering of fibers at the PCA level (Supplementary figs. ). MYH-based fiber type was explained by PC2 in both datasets, with one cluster of slow type 1 fibers and a second cluster containing the fast type 2A, type 2X and hybrid type 2A/2X fibers (Fig. ). These two clusters were bridged in both datasets by a small number of hybrid type 1/2A fibers. As expected, over-representation analysis of the top PC drivers confirmed that PC2 is driven by contractile and metabolic features (Fig. & Supplementary fig. , Supplementary Dataset - ). In general, the MYH-based fiber types adequately explain the continual variance along PC2, except for the so-called type 2X fibers, which were spread across the entirety of the transcriptomic fast cluster.
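Recovering the top drivers of each principal component from the loadings can be sketched with a numpy-only PCA. The simulation below plants a slow/fast contractile axis plus an independent axis with slightly more variance; all protein names, effect sizes, and helper names are illustrative assumptions, not the study's pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Simulated fiber x protein log-intensity matrix. One axis encodes the
# slow/fast contractile programme; a second, fiber type-independent
# axis (given slightly more variance here) encodes ribosomal content.
fiber_type = np.repeat([1.0, -1.0], n // 2)   # slow vs fast fibers
ribo_axis = rng.normal(size=n)                # type-independent axis
X = np.column_stack([
    2.0 * fiber_type,    # MYH7-like
    -2.0 * fiber_type,   # MYH2-like
    1.5 * fiber_type,    # TNNT1-like
    3.0 * ribo_axis,     # RPL38-like
    -2.5 * ribo_axis,    # RPS13-like (opposite direction)
    np.zeros(n),         # uninformative protein
]) + rng.normal(scale=0.3, size=(n, 6))
proteins = ["MYH7", "MYH2", "TNNT1", "RPL38", "RPS13", "CORE"]

# PCA via SVD of the centered matrix; rows of Vt are the loadings
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * S  # fiber coordinates along each PC

def top_drivers(pc, k=2):
    """Proteins with the largest absolute loading on the given PC."""
    order = np.argsort(-np.abs(Vt[pc]))[:k]
    return sorted(proteins[j] for j in order)
```

In this toy setup the higher-variance, fiber type-independent axis dominates PC1 while the contractile axis lands on PC2, so the loadings cleanly separate the two sets of drivers.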
Unexpectedly, MYH-based fiber type explained only the second greatest degree of variability (PC2), indicating that other biological factors (PC1), independent of MYH-based fiber type, have a substantial role in regulating skeletal muscle fiber heterogeneity. Over-representation analysis of the top drivers of PC1 indicated that its variance was determined primarily by cell-cell adhesion and ribosomal content in the transcriptome, and by costamere and ribosomal proteins in the proteome (Figs. & Supplementary fig. , Supplementary Dataset ). In skeletal muscle, the costamere connects the Z-disk to the sarcolemma and participates in force transmission and signaling . Annotating PCA plots with cell-cell adhesion (transcriptome, Fig. ) and costamere (proteome, Fig. ) features showed a strong shift to the left side of PC1, suggesting an enrichment for these features in some fibers. Closer inspection of fiber clustering at the UMAP level showed a MYH-based fiber type-independent gradient of expression for most features, rather than distinct subclusters of muscle fibers. This continuum holds true for several genes related to pathological conditions (Fig. ), such as CHCHD10 (neuromuscular disorders) , SLIT3 (muscle loss) and CTDNEP1 (muscle disease) . The same continuum was observed in the proteome, with proteins related to neurological diseases (UGDH) , insulin signaling (PHIP) and transcription (HIST1H2AB) (Fig. ). These data collectively show that slow/fast fiber type-independent heterogeneity across fibers is of a continual nature. Interestingly, drivers of PC2 showed a good correlation ( r = 0.663) between the transcriptome and proteome (Fig. ), indicating that the slow and fast fiber types, particularly the contractile and metabolic profiles of skeletal muscle fibers, are transcriptionally regulated. However, there was no correlation between the drivers of PC1 in the transcriptome and the proteome (r = -0.027) (Fig. ).
This suggests that slow/fast fiber type-independent variance is largely post-transcriptionally regulated. Since PC1 variance was largely explained by ribosomal gene ontology terms, and given that ribosomes play a profound and specialized role in the cell by actively participating in and influencing protein translation , we next set out to explore this unexpected ribosomal heterogeneity. We first colored the proteomics PCA plot based on the relative abundance of the proteins within the “cytosolic ribosome” GOCC term (Fig. ). Although the term was enriched on the positive side of PC1, and a slight gradient could be observed accordingly, ribosomal proteins were driving separation in both directions of PC1 (Fig. ). Amongst the ribosomal proteins enriched in the negative direction of PC1 were RPL18, RPS18 and RPS13 (Fig. ), whilst RPL31, RPL35 and RPL38 (Fig. ) were major drivers in the positive direction of PC1. Interestingly, RPL38 and RPS13 are highly expressed in skeletal muscle when compared to other tissues (Supplementary fig. ). These distinct ribosomal signatures across PC1 could not be observed in the transcriptome (Supplementary fig. ), indicating that this phenomenon is post-transcriptionally regulated. The concept of ribosomal heterogeneity and specialization has previously been introduced, whereby distinct subpopulations of ribosomes (ribosomal heterogeneity) can directly influence protein translation in different tissues and cells by selectively translating specific mRNA transcript pools (ribosomal specialization). To identify sub-sets of ribosomal proteins that are co-expressed within skeletal muscle fibers, we performed an unsupervised hierarchical clustering analysis of ribosomal proteins within the proteome (Figs. , Supplementary Dataset ). As expected, ribosomal proteins did not cluster by MYH-based fiber type.
However, we identified three distinct clusters of ribosomal proteins. The first (ribosomal_cluster_1) was coregulated alongside RPL38 and was therefore elevated in fibers in the positive direction of PC1. The second cluster (ribosomal_cluster_2) was coregulated alongside RPS13 and was elevated in fibers in the negative direction of PC1. A third cluster (ribosomal_cluster_3) displayed no coordinated differential expression within skeletal muscle fibers and could therefore be considered “core” ribosomal proteins within skeletal muscle. Both ribosomal_cluster_1 and ribosomal_cluster_2 contain ribosomal proteins previously demonstrated to regulate selective translation (e.g., RPL10A, RPL38, RPS19 and RPS25) and to functionally influence development (e.g., RPL10A, RPL38) – . In line with the results of the PCA analysis, the observed heterogeneous abundance of these ribosomal proteins across fibers was also of a continual nature (Supplementary fig. ). To visualize the position of the ribosomal proteins that display heterogeneity within the ribosome, we utilized a structural model of the human 80S ribosome (Protein Data Bank: 4V6X) (Fig. ). Highlighting the ribosomal proteins belonging to the different clusters showed that they were not located in close proximity, indicating that our approach did not enrich for a particular area/section of the ribosome. Yet interestingly, ribosomal_cluster_2 contained a lower proportion of large ribosomal subunit proteins than ribosomal_cluster_1 and ribosomal_cluster_3 (Supplementary fig. ). We observed that the majority of the proteins that display variable stoichiometry within skeletal muscle fibers are located on the surface of the ribosome (Fig. ), which is consistent with an ability to interact with internal ribosome entry site (IRES) elements within distinct mRNA populations to coordinate selective translation , .
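An unsupervised hierarchical clustering of ribosomal proteins of the kind described above can be sketched with scipy on simulated data. The module structure, protein names, and the three-cluster cut are illustrative assumptions chosen to mimic the clusters reported in the text, not the study's actual parameters:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(2)
n = 400

# Simulated fiber x ribosomal-protein matrix: two co-regulated modules
# moving in opposite directions along a PC1-like axis, plus proteins
# with no coordinated regulation across fibers.
axis = rng.normal(size=n)

def module(sign, size):
    return np.column_stack(
        [sign * axis + rng.normal(scale=0.5, size=n) for _ in range(size)]
    )

X = np.column_stack([module(+1, 3), module(-1, 3), rng.normal(size=(n, 3))])
names = ["RPL38", "RPL31", "RPL35",   # co-regulated, up with the axis
         "RPS13", "RPL18", "RPS18",   # co-regulated, down with the axis
         "CORE1", "CORE2", "CORE3"]   # no coordinated regulation

# Cluster proteins (columns) by the correlation between their
# expression profiles across fibers, then cut into three clusters
Z = linkage(X.T, method="average", metric="correlation")
assignment = dict(zip(names, fcluster(Z, t=3, criterion="maxclust")))
```

Using correlation distance means proteins are grouped by how their abundance co-varies across fibers rather than by absolute level, which is what separates the two oppositely regulated modules here.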
Furthermore, numerous proteins that display variable stoichiometry within skeletal muscle fibers are located close to functional regions, such as the mRNA exit tunnel (Fig. ), which can selectively regulate translation elongation and arrest of specific nascent peptides . Overall, our data identify heterogeneity in skeletal muscle ribosomal protein stoichiometry, which drives variance between skeletal muscle fibers. We next set out to identify features of fast and slow skeletal muscle fibers and how these are transcriptionally regulated. Comparing the fast and slow clusters defined in the UMAPs of both datasets (Figs. & inlays Fig. ), transcriptome and proteome analysis yielded 1366 and 804 differentially abundant features, respectively (Figs. , Supplementary Dataset – ). Expected differences in sarcomeric (e.g., tropomyosin and troponin), excitation-contraction coupling (SERCA isoforms) and energy metabolism-related (e.g., ALDOA and CKB) features were observed. In addition, transcripts and proteins regulating protein ubiquitination displayed differences between fast and slow fibers (e.g., USP54, SH3RF2 , USP28 and USP48) (Fig. ). Furthermore, the microprotein-encoding gene RP11-451G4.2 (DWORF) , which has previously been shown to be differentially expressed between lamb muscle fiber types and to enhance SERCA activity in cardiac muscle , was significantly up-regulated in slow skeletal muscle fibers (Fig. ). Also at the single fiber level, clear differences could be observed for known features such as the metabolism-related isoforms of lactate dehydrogenase (LDHA and LDHB, Figs. and Supplementary fig. ) , , as well as for previously unknown fiber type-specific features (e.g., IRX3 , USP54 , USP28, and DPYSL3) (Fig. ). There was a reasonable overlap of differentially expressed features (Supplementary fig.
) and correlation of fold changes between the transcriptomic and proteomic datasets, primarily driven by the large expression differences in sarcomeric features (Supplementary fig. ). Notably, some features (e.g., USP28, USP48, GOLGA4, AKAP13) showed strong post-transcriptional regulation, with slow/fast fiber type-specific expression profiles only at the proteome level (Supplementary fig. ). We next performed over-representation analysis on the differentially abundant genes and proteins (Figs. , Supplementary Dataset ). Enriched pathways of features that were differential in both datasets showed expected differences such as fatty acid beta-oxidation and ketone metabolic process (slow fibers), myofilament/muscle contraction (fast and slow fibers, respectively) and carbohydrate catabolic process (fast fibers). Serine/threonine protein phosphatase activity was also enriched in fast fibers, driven by features such as phosphatase regulatory and catalytic subunits (PPP3CB, PPP1R3D, and PPP1R3A), known to regulate glycogen metabolism (Supplementary figs. ). Other pathways enriched in fast fibers were processing (P-) bodies (YTHDF3, TRIM21, LSM2) in the proteome (Supplementary fig. ), possibly related to post-transcriptional regulation , and transcription factor activity ( SREBF1 , RXRG , RORA ) in the transcriptome (Supplementary fig. ). Slow fibers showed enrichment for oxidoreductase activity ( BDH1 , DCXR , TXN2 ) (Supplementary fig. ), amide binding ( CPTP , PFDN2 , CRYAB ) (Supplementary fig. ), extracellular matrix ( CTSD , ADAMTSL4 , LAMC1 ) (Supplementary fig. ) and receptor-ligand activity ( FNDC5 , SPX , NENF ) (Supplementary fig. ). To gain more insight into the transcriptional regulation of slow/fast fiber type signatures, we performed transcription factor enrichment analysis using SCENIC (Supplementary Dataset ). Many transcription factors were significantly enriched between fast and slow fibers (Fig. ).
This included transcription factors such as MAFA , previously linked to the development of fast fibers , but also multiple transcription factors not previously linked to the fiber type-specific gene program. These included PITX1 , EGR1 and MYF6 as the most enriched transcription factors within fast fibers (Fig. ). Conversely, ZSCAN30 and EPAS1 (also known as HIF2A ) were the most enriched transcription factors within slow fibers (Fig. ). In line with this, MAFA expression levels were higher in the UMAP area corresponding to fast muscle fibers, whereas the opposite expression pattern was observed for EPAS1 (Fig. ). Alongside known protein-coding genes, there exists a multitude of non-coding RNA biotypes, potentially involved in the regulation of human development and disease , . Several non-coding RNAs displayed fiber type specificity in the transcriptomics dataset (Figs. & Supplementary Dataset ), including LINC01405 , which is very specific to slow fibers and is reported to be downregulated in muscle of mitochondrial myopathy patients . Conversely, RP11-255P5.3 , corresponding to the lnc-ERCC5-5 gene ( https://lncipedia.org/db/transcript/lnc-ERCC5-5:2 ), displayed fast fiber type-specificity. Both LINC01405 ( https://tinyurl.com/x5k9wj3h ) and RP11-255P5.3 ( https://tinyurl.com/29jmzder ) display specificity to skeletal muscle (Supplementary figs. ) and have very few or no known contractile genes within a 1 Mb genomic neighborhood, suggesting a specialized role in fiber type regulation rather than a regulatory role for a neighboring contractile gene. The slow/fast fiber type-specific expression profiles of LINC01405 and RP11-255P5.3 were independently validated using RNAscope (Fig. ). Recently, it has become apparent that numerous assumed non-coding transcripts encode translated microproteins, some of which regulate muscle function , .
To identify microproteins with potential fiber type specificity, we searched our 1000 fiber proteomic dataset using a custom FASTA file containing sequences from the detected non-coding transcripts ( n = 305) from the 1000 fiber transcriptome dataset (Fig. ). This resulted in the identification of 197 microproteins arising from 22 distinct transcripts, of which 71 microproteins displayed differential regulation between slow and fast skeletal muscle fibers (Supplementary figs. and Supplementary Dataset ). Three microprotein products were identified for LINC01405 , one of which displays a slow fiber type-specificity similar to its transcript (Figs. and Supplementary fig. ). Thus, we identify LINC01405 as a microprotein-encoding gene, which displays specificity for slow skeletal muscle fibers. Having developed a comprehensive workflow to characterize the proteome of single muscle fibers at scale and discovered regulators of fiber heterogeneity in the healthy state, we applied this pipeline to understand how nemaline myopathy impacts skeletal muscle fiber heterogeneity. Nemaline myopathy is a genetic muscular disorder that causes muscle weakness, resulting in a range of complications for affected children, including respiratory difficulties, scoliosis, and physical immobility , . Typically, in nemaline myopathy, pathogenic variants in genes such as actin alpha 1 ( ACTA1 ) drive the fiber type composition towards slow fiber predominance, although this effect is heterogeneous. The only clear exception is troponin T1 ( TNNT1 ) nemaline myopathy, in which a fast fiber predominance is seen. Thus, a deeper characterization of the heterogeneity behind the skeletal muscle fiber dysregulation observed in nemaline myopathies may help untangle the complex relationship between these diseases and muscle fiber types.
Muscle fibers isolated from patients with ACTA1 - and TNNT1 -mutation derived nemaline myopathies display substantial myofiber atrophy or hypotrophy compared to healthy controls ( n = 3 per group) (Figs. , Supplementary Table ), which presents a considerable technical challenge owing to the limited material available for proteomic analysis. Nonetheless, we were able to detect 2485 proteins from 272 skeletal muscle fibers. After filtering for a minimum of 1000 quantified proteins per fiber, downstream bioinformatic analyses were performed on 250 fibers. On average, 1573 ± 359 proteins were quantified per fiber after filtering (Supplementary figs. , Supplementary Dataset – ). Importantly, only a modest reduction in proteome depth was apparent in samples from patients with nemaline myopathy, despite the markedly reduced fiber size. Furthermore, processing of this data with our custom FASTA file (including non-coding transcripts) identified five microproteins within the skeletal muscle fibers from nemaline myopathy patients (Supplementary Dataset ). A wide dynamic range was apparent in the proteome, whilst the shared proteins in the control participants correlated well with those in the previous analysis of the 1000 fiber proteome study (Supplementary fig. ). As nemaline myopathies influence the MYH-based fiber type proportions within skeletal muscle , , we first investigated the MYH-based fiber type of our nemaline myopathy patients and controls. Fiber type was determined using the unbiased approach described previously for the 1000 fiber studies (Supplementary figs. ), and once again pure type 2X fibers could not be identified (Fig. ). We observed a heterogeneous effect of nemaline myopathies on fiber type, with two patients with ACTA1 mutations displaying an increased proportion of type 1 fibers, whilst two patients with TNNT1 -nemaline myopathy displayed a reduced proportion of type 1 fibers (Fig. ).
Indeed, MYH2 and the fast troponin isoforms (TNNC2, TNNI2, and TNNT3) were downregulated in ACTA1 -nemaline myopathy, whilst MYH7 was downregulated in TNNT1 -nemaline myopathy (Supplementary fig. ). This is in line with previous reports of heterogeneous fiber type switching in nemaline myopathies , . We validated these findings using immunohistochemistry, finding type 1 fiber predominance in ACTA1 -nemaline myopathy patients, while the opposite was observed in TNNT1-nemaline myopathy patients (Fig. ). At the single-fiber proteome level, skeletal muscle fibers from ACTA1 - and TNNT1 -nemaline myopathy patients clustered away from the majority of control fibers, with TNNT1- nemaline myopathy fibers tending to be the most severely affected (Fig. ). This was particularly apparent when we produced a PCA plot of pseudo-bulked fibers for each patient, with TNNT1 -nemaline myopathy patients 2 & 3 lying furthest from the control samples (Supplementary figs. , Supplementary Dataset ). To further understand how fibers from myopathy patients compare to the healthy condition, we capitalized on the depth of information provided by our 1000 fiber proteome study of healthy adult participants. We projected the fibers from our myopathy dataset (both ACTA1 - and TNNT1 -nemaline myopathy patients and controls) onto the PCA determined from the 1000 fiber proteome study (Fig. ). Control fibers displayed a similar distribution of MYH-based fiber type along PC2 as the 1000 fiber proteome study. However, the majority of fibers from nemaline myopathy patients shifted downwards along PC2, overlapping with the healthy fast fibers, irrespective of their own MYH-based fiber type. Thus, despite evidence for a fiber type shift towards type 1 fibers in ACTA1 -nemaline myopathy patients when quantified using MYH-based approaches, both ACTA1 - and TNNT1 -nemaline myopathies shift the skeletal muscle fiber proteome to resemble fast fibers.
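This kind of projection onto a pre-fitted reference PCA can be sketched with numpy alone. The data are simulated and `fit_pca`/`project` are our illustrative helpers, not the study's code; the key detail is that new samples are centered with the reference mean and multiplied by the reference components, rather than re-fitting the PCA on the combined data:

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_pca(X, n_pc=2):
    """Fit PCA on a reference matrix; return its mean and components."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_pc]

def project(X, mu, components):
    """Project samples into an existing PC space (no re-fitting)."""
    return (X - mu) @ components.T

# Reference cohort: 1000 'healthy' fibers; the first half are slow
# fibers with an elevated oxidative feature (column 0).
healthy = rng.normal(size=(1000, 5))
healthy[:500, 0] += 3.0
mu, comps = fit_pca(healthy)
ref_scores = project(healthy, mu, comps)

# Patient fibers typed as slow but lacking the oxidative elevation:
# projected with the reference mean/components, their PC1 scores land
# among the healthy fast fibers rather than the slow ones.
patient = rng.normal(size=(50, 5))
pat_scores = project(patient, mu, comps)
```

Freezing the mean and components is what makes the patient and reference coordinates directly comparable; re-fitting on the pooled data would rotate the axes and blur the reference fiber type structure.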
We next directly compared each patient group with the samples from healthy controls, resulting in 256 and 552 differentially abundant proteins in ACTA1 - and TNNT1 -nemaline myopathies, respectively (Figs. & Supplementary figs. , Supplementary Dataset ). Gene set enrichment analysis identified the coordinated reduction of mitochondrial proteins (Figs. , Supplementary Dataset ). Surprisingly, this was completely independent of MYH-based fiber type (Figs. & Supplementary fig. , Supplementary Dataset ), despite the divergent fiber type predominance between ACTA1 - and TNNT1 -nemaline myopathies. Three microproteins were also regulated by either ACTA1 - or TNNT1 -nemaline myopathies. Two of those microproteins, ENSG00000215483_TR14_ORF67 (also known as LINC00598 or Lnc-FOXO1 ) and ENSG00000229425_TR25_ORF40 ( lnc-NRIP1-2 ), displayed differential abundance only within type 1 fibers, with ENSG00000215483_TR14_ORF67 having previously been reported to play a role in cell cycle regulation . On the other hand, ENSG00000232046_TR1_ORF437 (corresponding to LINC01798 ) was upregulated in both type 1 and type 2A fibers from ACTA1- nemaline myopathy when compared to healthy controls (Supplementary figs. , Supplementary Dataset ). Conversely, ribosomal proteins were largely unaffected by nemaline myopathies, although RPS17 was downregulated in ACTA1- nemaline myopathy (Fig. ). Enrichment analysis also identified the up-regulation of immune system processes in both ACTA1 - and TNNT1 -nemaline myopathies, as well as of cell adhesion in TNNT1 -nemaline myopathy (Fig. ). The enrichment of these extracellular terms was reflected by extracellular matrix proteins driving the PCA in the negative direction in both PC1 and PC2, i.e., towards the most affected fibers (Fig. ). Both patient groups overexpressed extracellular proteins involved in the immune response and the sarcolemma repair machinery, such as annexins (ANXA1, ANXA2, ANXA5) , and their interactor S100A11 (Supplementary figs. ).
This process has previously been reported to be upregulated in muscular dystrophies , yet to our knowledge it has not previously been associated with nemaline myopathies. Normal function of this molecular machinery is required both for sarcolemma membrane repair upon injury and for the fusion of new myoblasts to muscle fibers , . Therefore, up-regulation of this process in both patient groups suggests a reparative response to damage caused by myofibril instability. The effects of each nemaline myopathy were well correlated (r = 0.736) and displayed reasonable overlap (Supplementary fig. ), indicating that ACTA1 - and TNNT1 -nemaline myopathies induce similar effects on the proteome. Nonetheless, a number of proteins displayed regulation in only ACTA1 - or TNNT1 -nemaline myopathies (Supplementary fig. ). MFAP4, a pro-fibrotic protein, was one of the most up-regulated proteins in TNNT1 -nemaline myopathy, whilst unchanged in ACTA1 -nemaline myopathy. SKIC8, a component of the PAF1C complex, which regulates the transcription of HOX genes , was down-regulated in TNNT1 -nemaline myopathy but unaffected in ACTA1 -nemaline myopathy (Supplementary fig. ). Direct comparisons between ACTA1 - and TNNT1 -nemaline myopathies identified a greater effect of TNNT1 -nemaline myopathy on the reduction of mitochondrial proteins and the increase of immune system proteins (Fig. & Supplementary figs. & Supplementary fig. ). These data are consistent with the greater degree of atrophy/hypotrophy apparent in TNNT1 - compared to ACTA1 -nemaline myopathy (Fig. ), indicating that TNNT1 -nemaline myopathy is the more severe form of the disease. To assess whether the observed effects of nemaline myopathy were also present at the whole-muscle level, we conducted a bulk proteome analysis on muscle biopsies from the same TNNT1- nemaline myopathy patients and compared them against control individuals ( n = 3 per group) (Supplementary fig. , Supplementary Dataset ).
As expected, upon PCA, control individuals tightly clustered together whereas, similarly to the single fiber analysis, TNNT1- nemaline myopathy patients displayed a higher inter-sample variance (Supplementary fig. ). The bulk analysis was able to recapitulate the differentially expressed proteins (Supplementary figs. , Supplementary Dataset ) and biological processes (Supplementary fig. , Supplementary Dataset ) highlighted in the single fiber comparison, although it lost the ability to discriminate between fiber types and did not account for the heterogeneous effect of the disease across different fibers. Together, these data demonstrate that single muscle fiber proteomics can illuminate clinical biology that is unapparent using targeted approaches such as immunoblotting. Furthermore, these data highlight the limitations of solely relying upon MYH-based fiber typing to describe phenotypic adaptations. Indeed, despite divergent MYH-based fiber type switching in actin and troponin nemaline myopathies, both nemaline myopathies uncouple the MYH-based fiber type from skeletal muscle fiber metabolism, shifting towards a faster, less oxidative muscle proteome. Cellular heterogeneity is important to enable tissues to meet a wide range of demands. In skeletal muscle, this has classically been described by fiber types characterized by varying degrees of force production and fatigability. It is apparent, however, that this explains just a fraction of the variation in skeletal muscle fibers, with far greater variability, complexity and nuance residing in these fibers than previously believed. With advances in technology, the elements that regulate skeletal muscle fibers can now be elucidated. Indeed, our data indicate that type 2X fibers may not be a distinct sub-classification of skeletal muscle fibers.
Furthermore, we identify metabolic, ribosomal, and cell junction proteins as major determinants of skeletal muscle fiber heterogeneity. By applying our proteomics workflow to samples from nemaline myopathy patients, we further evidence that MYH-based fiber typing does not capture the complete heterogeneity of skeletal muscle, particularly when the system is perturbed. Indeed, nemaline myopathies induce a shift towards faster, less oxidative fibers regardless of their MYH-based fiber type. Skeletal muscle fibers have been classified since the 19th century , . Recent omics-based analyses have enabled us to start understanding expression profiles and responses to various stimuli specific to distinct MYH-based fiber types – , , , , . As demonstrated herein, omics approaches also have the advantage of increased sensitivity in the quantification of fiber type markers over traditional antibody-based approaches, whilst also not relying on the quantification of a single (or few) markers for determining skeletal muscle fiber type. We leveraged complementary transcriptomics and proteomics workflows, and integrated the results, to explore the transcriptional and post-transcriptional regulation of human skeletal muscle fiber heterogeneity. This pipeline resulted in the observation that pure type 2X fibers could not be identified at the protein level in vastus lateralis skeletal muscle of our cohort of young healthy males. This is in line with previous single fiber studies identifying < 1% pure type 2X fibers in healthy vastus lateralis , although this should be confirmed in other muscles in the future. The discrepancy between the identification of near pure type 2X fibers at the mRNA level but only hybrid type 2A/2X fibers at the protein level is puzzling. mRNA expression of MYH isoforms is not circadian , indicating that it is unlikely that we have simply “missed” the on-signal of MYH2 in the apparently pure type 2X fibers at the RNA level.
One possible explanation may be differences in the protein and/or mRNA stability of MYH isoforms, though this is purely hypothetical. Indeed, no fast fiber was 100% pure for any MYH isoform, though whether levels of MYH1 mRNA expression in the range of 70–90% could result in similar MYH1 and MYH2 abundance at the protein level is unclear. Nonetheless, when the whole transcriptome or proteome is considered, clustering-based analyses could only confidently identify two distinct clusters, which represented slow and fast skeletal muscle fibers regardless of their exact MYH composition. This is consistent with analyses using single nuclei transcriptomics approaches which commonly identify only two distinct clusters of myonuclei – . Furthermore, whilst previous proteomics-based studies have identified type 2X fibers, these fibers did not cluster away from the rest of the fast fibers and only displayed a handful of differentially abundant proteins compared to other MYH-based fiber types . These findings indicate that we should revert to the view of muscle fiber classification from the early twentieth century and classify human skeletal muscle fibers not into three distinct classifications based on MYHs, but instead into just two clusters based on their metabolic and contractile properties . Better still, we should consider muscle fiber heterogeneity in multiple dimensions. Previous omics studies already pointed in this direction, by showing that skeletal muscle fibers do not form discrete clusters, yet fall along a continuum , , , , . Here, we found that over and above the variance within the contractile and metabolic signatures of skeletal muscle, fibers could also be separated by features related to the cell junction and translation machinery. Indeed, we identified ribosomal heterogeneity across skeletal muscle fibers that drives a slow/fast-fiber type independent heterogeneity. 
The underlying reason for such vast slow/fast-fiber type independent heterogeneity amongst fibers is not immediately obvious, though this could allude to a specialized spatial organization within a muscle fascicle to enable an optimal response to specific forces and loads , specialized cellular or organ communication with other cell types within the muscle microenvironment – , or differential ribosomal activity in individual fibers. Indeed, ribosomal heterogeneity, via paralog substitution of RPL3 and RPL3L or at an rRNA 2′O-methyl level, has been implicated in skeletal muscle hypertrophy , . Multi-omics and spatial applications in concert with functional characteristics of single muscle fibers will further advance our understanding of muscle biology . In analyzing the proteome of single muscle fibers from patients with nemaline myopathies, we also demonstrate the utility, power, and applicability of single muscle fiber proteomics in uncovering clinical pathophysiology in skeletal muscle. Furthermore, by comparing our workflow against a bulk proteome analysis, we were able to demonstrate that single muscle fiber proteomics is able to capture the same depth of information as bulk tissue omics and to expand on it by taking inter-fiber heterogeneity and muscle fiber type into consideration. In addition to observing expected, albeit divergent, differences in fiber type proportions in ACTA1 - and TNNT1 -nemaline myopathies compared to healthy controls , we also identified oxidative and extracellular remodeling, which is uncoupled from this MYH-based fiber type switch. Fibrosis has previously been reported for TNNT1 nemaline myopathy . However, our analysis builds upon this to also identify the up-regulation of stress-related secreted proteins within the extracellular space, such as annexins, involved in the sarcolemma repair machinery – in fibers from both ACTA1 - and TNNT1 -nemaline myopathy patients.
Overall, the upregulation of annexins in muscle fibers from nemaline myopathy patients may represent a cellular response to rescue severely atrophying fibers. Whilst this study represents the largest omic analysis of intact human single muscle fibers to date, it is not without limitations. We isolated skeletal muscle fibers from a relatively small and homogeneous sample of participants and from a single muscle ( vastus lateralis ). In this respect, it is impossible to rule out the existence of specific fiber populations in different muscle types and at the extremes of muscle physiology. For example, we cannot rule out that a subtype of ultra-fast fibers (e.g., pure type 2X fibers) may become apparent in highly trained sprint and/or power athletes or during muscle disuse , . Furthermore, our limited participant pool did not allow us to investigate sex differences in fiber heterogeneity, even though fiber type proportions are known to differ between males and females. In addition, we were unable to perform transcriptomics and proteomics on the same muscle fibers or in samples from the same participants. As we and others continue to optimize single-cell and single muscle fiber omics-analyses towards ultra-low sample input (as demonstrated here with the analysis of fibers from nemaline myopathy patients), the possibility of combining multi-omics (and functional) approaches in a single muscle fiber is tantalizingly close. Collectively, our data identify and explain transcriptional and post-transcriptional drivers of heterogeneity within skeletal muscle. In particular, we provide data to question long-standing dogmas within skeletal muscle physiology related to the classical definitions of MYH-based fiber types. In doing so we hope to reignite the debate and ultimately redefine our understanding of skeletal muscle fiber classifications and heterogeneity.
1000 fiber transcriptomics Participant information Fourteen participants (12 males / 2 females) of Caucasian origin volunteered to take part in this study, which was approved by the Ethical Committee of Ghent University Hospital (BC-10237), in agreement with the 2013 Declaration of Helsinki and registered on ClinicalTrials.gov (NCT05131555). General characteristics of the participants can be found in Supplementary Table . After oral and written informed consent, participants were medically screened before final inclusion. Participants were young (22-42 years old), healthy (no diseases and non-smoking) and moderately physically active. Maximal oxygen uptake was determined as a marker of physical fitness during a graded incremental cycling test, as previously described . Muscle biopsy collection Muscle biopsies were collected in the rested and fasted state, on three different days, separated by 14 days. As these samples were collected as part of a larger study, on each of these days participants ingested a placebo (lactose), an H1-receptor antagonist (540 mg fexofenadine) or an H2-receptor antagonist (40 mg famotidine) 40 minutes before muscle biopsy collection. We have previously shown that these histamine receptor antagonists do not affect the resting skeletal muscle state , and no clustering was apparent based on condition in our quality control plots (Supplementary figs. & Supplementary fig. ). Dietary intake was standardized 48 hours before each experimental day (41.4 kcal/kg body weight, 5.1 g/kg body weight carbohydrates, 1.4 g/kg body weight protein and 1.6 g/kg body weight fat per day), followed by a standardized breakfast on the morning of the experimental day (1.5 g/kg bodyweight carbohydrates). Muscle biopsies of the m. vastus lateralis were then collected after local anesthesia (0.5 mL of 1% Xylocaïne without epinephrine) using the percutaneous Bergström technique with suction .
The muscle samples were immediately submerged in RNAlater and stored at 4 °C until manual fiber dissection (max. 3 days). Single fiber isolation Freshly excised muscle fiber bundles were transferred to fresh RNAlater in a petri dish. Individual muscle fibers were then manually dissected using a stereomicroscope and fine forceps. Twenty-five fibers were dissected per biopsy, with special care to select fibers from different sections of the biopsy. After dissection, each fiber was carefully submerged in 3 µL of lysis buffer (SingleShot Cell Lysis kit, Bio-rad), containing proteinase K and DNase enzymes to remove unwanted proteins and DNA. Next, cell lysis and protein/DNA removal were initiated by brief vortexing, spinning the liquid down in a microcentrifuge and incubating at room temperature (10 min). Lysates were then incubated for 5 min at 37 °C and 5 min at 75 °C in a thermocycler (T100, Bio-Rad), immediately followed by storage at −80 °C until further processing. Sequencing library preparation Illumina-compatible libraries from polyadenylated RNA were prepared from 2 µL of the muscle fiber lysates using the QuantSeq-Pool 3’ mRNA-Seq library prep kit (Lexogen). Detailed methodology can be found in the manufacturer's guidelines. The process was initiated by reverse transcription for first strand cDNA synthesis, during which Unique Molecular Identifiers (UMIs) and sample-specific i1 barcodes were introduced, enabling sample pooling and reducing technical variability in the downstream process. Next, cDNA from 96 fibers was pooled and purified using magnetic beads, followed by RNA removal and second strand synthesis by random priming. Libraries were purified using magnetic beads, followed by addition of pool-specific i5/i7 indices and PCR amplification. A last purification step was performed, finalizing the Illumina-compatible libraries.
A high sensitivity small DNA Fragment Analysis kit (Agilent Technologies, DNF-477-0500) was used to assess the quality of each library pool. Illumina sequencing The individual pools were then pooled equimolarly (2 nM), based on Qubit-quantified concentrations. The final pool was subsequently sequenced with a NovaSeq S2 kit (1 × 100 nucleotides) with a loading of 2 nM (4% PhiX) in standard mode on a NovaSeq 6000 instrument. Primary data processing Our pipeline was based on the QuantSeq Pool data analysis pipeline from Lexogen ( https://github.com/Lexogen-Tools/quantseqpool_analysis ). First, the data was demultiplexed based on the i7/i5 indices with bcl2fastq2 (v2.20.0). The next demultiplexing step was performed via idemux (v0.1.6) according to the i1 sample-specific barcodes in read 2, followed by extraction of UMI sequences with umi_tools (v1.0.1). Trimming of the reads was then performed in multiple rounds with cutadapt (v3.4), with removal of reads that were too short (length <20) or consisted entirely of adapter sequences. Reads were then aligned to the human genome with STAR (v2.6.0c), followed by BAM file indexing with SAMtools (v1.11). Duplicate reads were removed with umi_tools (v1.0.1). Finally, counting of the alignments was performed with featureCounts from Subread (v2.0.3). At several intermediate steps during the pipeline, quality control was performed with FastQC (v0.11.9). Initial Seurat processing All further bioinformatics processing and visualization was performed in R (v4.2.3), primarily with the Seurat (v 4.4.0) workflow . The individual UMI counts and metadata matrices were thus transformed into a Seurat object. Genes with expression in less than 30% of all fibers were removed. Low-quality samples were then removed based on a minimum threshold of 1000 UMI counts and 1000 detected genes. This resulted in a total of 925 fibers that passed all quality control filtering steps.
Normalization of UMI counts was performed using the SCTransform v2 Seurat method , including all 7418 detected features and regressing out participant variation. All relevant metadata can be found in Supplementary Dataset . Proteomics Sample collection - 1000 fiber proteome Participant information Stored biobank muscle specimens were used for the purpose of the present study (Clinicaltrials.gov identifier: NCT04048993). The specimens were collected from five active and healthy male volunteers (aged 21–35 years) of Caucasian ancestry who gave their written and oral informed consent with approval from the Science Ethics Committee of the Capital Region in Denmark (H-1-2012-090) and complied with the guidelines of the 2013 Declaration of Helsinki. General characteristics of the participants can be found in Supplementary Table . Participants were young, healthy (no diseases and non-smoking) and moderately physically active. Muscle biopsy collection Participants arrived in the morning after an overnight fast and rested in the supine position for 1 hour. Then, local anesthesia (2-3 mL Xylocaine 2%; lidocaine without epinephrine, AstraZeneca, Denmark) was applied under the skin above the fascia at the belly of the m. vastus lateralis muscle. A muscle biopsy was sampled through a small 3–4 mm incision using a Bergström needle with suction. The muscle biopsy specimen was snap-frozen in liquid nitrogen and stored at −80 °C until analysis. Single muscle fiber isolation Muscle fibers were isolated from freeze-dried specimens as previously described . In brief, muscle biopsies were freeze-dried for 48 hours. Subsequently, fibers were isolated in a humidity- and temperature-controlled room (humidity of 25%) using fine forceps under a stereomicroscope. Approximately 200 single muscle fibers were isolated for each biopsy, resulting in a total of 1038 fibers. To ensure the fibers settled at the bottom of the tube, each fiber-containing tube underwent centrifugation at 20,000 g using a small centrifuge.
Next, fibers were resuspended in 15 µL of lysis buffer (1% sodium dodecyl sulfate (SDC), 40 mM chloroacetamide (CAA), 10 mM dithiothreitol (DTT) in 50 mM Tris pH 8.5). Sample collection – myopathy Participant information Six patients with severe nemaline myopathy were selected from our nemaline myopathy study cohort. Three patients (2 male and 1 female) had pathogenic variants in ACTA1 , representing the conventional severe form, and three patients had pathogenic variants in TNNT1 (3 male), resulting in a rare, progressive form of nemaline myopathy. Three healthy individuals with no history of neuromuscular disease were used as controls. All participants are of Caucasian ancestry (Supplementary Table ). Muscle biopsy collection Healthy control participant biopsies ( n = 3 males) were used from an original study , and were therefore collected, snap frozen in liquid nitrogen and stored at −80 °C as originally described. For the present study, a fragment of this stored biopsy was dissected under sterile, frozen conditions before being prepared for single myofiber isolation (detailed below). Acquisition of biopsies of healthy control patients was approved by the local ethics committee (Copenhagen and Frederiksberg) in Denmark (hs:h-15002266). Those of myopathy patients were consented, stored, and used in accordance with the Human Tissue Act under local ethical approval in the United Kingdom (REC 13/NE/0373). All procedures were carried out in accordance with the Declaration of Helsinki. Single fiber isolation Dissected fragments of muscle biopsy were placed in ice-cold, 22 micron filtered relaxing solution (4 mM Mg-ATP, 1 mM free Mg 2+ , 10^-6.00 mM free Ca 2+ , 20 mM imidazole, 7 mM EGTA, 14.5 mM creatine phosphate, KCl to an ionic strength of 180 mM and pH to 7.0) for ~3 minutes before being immersed in fresh relaxing solution on a sterile petri dish and mounted on ice under a dissection microscope for single fiber isolation.
Fibers were cleaved from the tissue/biopsy, ensuring a variety of sample/biopsy locations were used and that only single fibers were selected. Following isolation, fibers were manually moved to a sterile 96-well plate containing 15 µL of lysis buffer (identical to that detailed above), where the tweezers containing the fiber were submerged, spun and agitated in lysis buffer to ensure the fibers dissociated from the tweezers. To ensure fibers settled at the bottom of the well, the 96-well plate was subjected to gentle vortexing and centrifugation (1,000 g). Proteomics analysis Sample preparation Samples from both proteomics studies followed the same sample preparation workflow. In order to extract the proteins, samples were boiled at 95 °C in a thermomixer with gentle shaking (800 rpm) and sonicated in a bioruptor instrument with 30-second on/off cycles for 15 minutes. A small 5 µL fraction of lysate from each sample was saved for antibody-based fiber typing of the 1000 fiber samples. Next, samples were processed following a modified version of the in-solution digestion sample preparation protocol. In brief, the total volume was adjusted to 50 µl by addition of digestion buffer, containing 50 mM Tris pH 8.5 buffer, LysC (Wako) at an enzyme-to-protein ratio of 1:500 and trypsin (Promega) at an enzyme-to-protein ratio of 1:100. Single muscle fiber lysates were digested overnight in a thermomixer set to 37 °C and 800 rpm. The next day, protein digestion was quenched by addition of 50 µl of 2% trifluoroacetic acid (TFA) in isopropanol. Peptides were desalted using in-house prepared single-use reverse-phase StageTips containing styrenedivinylbenzene reverse-phase sulfonate (SDB-RPS) disks. Then, desalted peptides were loaded in Evotips (Evosep) following the manufacturer's instructions prior to LC-MS/MS analysis. Bulk tissue samples were prepared using the same protocol utilized for single fibers, with a few modifications to sample lysis.
Tissue samples were first powdered using a tissue crusher over dry ice before resuspending the powder in the same lysis buffer described above. Then, the samples were homogenized using an IKA Turrax homogenizer for 2 minutes prior to boiling and sonication. From there onwards the samples underwent the same protocol described above. Proteomics library preparation Fibers from the five healthy control individuals participating in the 1000 fiber study were carefully dissected and combined in order to create a pooled fiber lysate. Then, 200 µg of protein corresponding to each participant-specific lysate were pooled together into one final protein lysate that was processed following the same sample preparation workflow just described. 20 µg of desalted peptides were fractionated using high-pH reverse-phase chromatography (HpH-RP). Fractionation was carried out on a Kinetex 2.6 µm EVO C18 100 Å, 150 × 0.3 mm column manufactured by Phenomenex and using an EASY-nLC 1200 System (Thermo) operating at 1.5 µL/min. Separation was accomplished using a 62 min step-gradient starting from 3% to 60% solvent B (which consisted of 10 mM TEAB in 80% acetonitrile) and solvent A (containing 10 mM TEAB in water). The total run time was 98 min, which included wash and column equilibration. Throughout the fractionation, peptides were eluted and collected every 60 s, obtaining 96 single fractions without concatenation. Finally, 200 ng of HpH-RP fractionated peptides were loaded, concentrated and desalted on Evotips (Evosep) following the instructions provided by the manufacturer. Liquid chromatography tandem mass spectrometry Proteomics measurements were performed using LC-MS instrumentation consisting of an Evosep One HPLC system (Evosep) coupled via electrospray ionization to a timsTOF SCP mass spectrometer (Bruker). Peptides were separated utilizing 8 cm, 150 μm ID columns packed with C18 beads (1.5 μm) (Evosep).
Chromatographic separation was achieved by the ‘60 samples per day’ method, followed by electrospray ionization through a CaptiveSpray ion source and a 10 μm emitter into the MS instrument. Single muscle fiber peptides were measured in DIA-PASEF mode following a previously described method , while library fractions were measured using DDA-PASEF. In brief, the DDA-PASEF scan range encompassed 100–1700 m/z for both MS and MS/MS, and the TIMS mobility range was set to 0.6–1.6 (V cm −2 ). Both TIMS ramp and accumulation times were configured to 100 ms, and 10 PASEF ramps were recorded for a total cycle time of 1.17 s. The MS/MS target intensity and intensity threshold were defined as 20,000 and 1,000, respectively. An exclusion list of 0.4 min for precursors within 0.015 m/z and 0.015 V cm −2 width was also activated. For DIA-PASEF the scan range was established at 400-1000 (m/z), the TIMS mobility range to 0.64-1.37 (V cm −2 ), and ramp and accumulation times were both set to 100 ms. A short-gradient method was used, which included 8 DIA-PASEF scans with three 25 Da windows per ramp, resulting in an estimated cycle time of 0.95 s. MS data processing Library files were processed using the MSFragger functionality within FragPipe v19.0 under the SpecLib workflow with default settings, including a minimum peptide length of seven amino acids and a maximum of two missed cleavages allowed . Spectra were searched against a reviewed human FASTA from UniProt (March 2022, 20410 entries) and the output library contained a total of 5350 protein groups and 84383 precursors. Sample raw MS files were analyzed using DIA-NN version 1.8 in a library-based manner against the MS library just described.
Protein group quantification was based on proteotypic peptides, the neural network was set to double-pass mode, the quantification strategy was set to “Robust LC (high accuracy)” and the match between runs option was enabled; the remaining parameters were left at their defaults, which included precursor FDR set to 1% and peptide length of 7-30 amino acids. Data processing Further data analysis was performed under the R environment (R version 4.22). Both a metadata data frame containing sample and participant information and the “PG_matrix.tsv” file from DIA-NN’s output were loaded in RStudio. The 1000 fiber data frame was filtered to remove samples with less than 50% valid protein intensity values, resulting in a total of 974 fibers. Next, rows were filtered to remove proteins with less than 30% valid values across samples, resulting in a total of 1685 proteins. Regarding the myopathy dataset, after filtering samples for 50% valid values, the number of samples was 250. We included in the analysis proteins that were quantified in 70% of the samples in at least one condition (conditions: control, actin myopathy and troponin myopathy), resulting in a total of 1545 proteins. Both data frames were then log2 transformed and normalized using the normalizeBetweenArrays function from the limma package (v 3.54.2), with the method argument set to quantile . Then, batch correction through the ComBat function from the sva package (v3.50.0) was applied to minimize the effect of the three technical batches that originated during mass spectrometry measurement. Finally, missing values were replaced by random numbers from a Gaussian distribution with the default settings of the tImpute function from the PhosR package (v 1.12.0) . All relevant metadata for the 1000 fiber proteome and nemaline myopathy datasets can be found in Supplementary Data & , respectively.
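The filtering, transformation, and imputation steps described above can be sketched as follows. This is a minimal Python/NumPy stand-in for the R workflow: median centering replaces limma's quantile normalization, and a down-shifted Gaussian (with illustrative shift and width parameters) replaces PhosR's tImpute; only the valid-value thresholds are taken from the text.

```python
import numpy as np

def preprocess(intensity, rng=None, min_valid_sample=0.5, min_valid_protein=0.3):
    """Filter samples/proteins by their fraction of valid values, log2-transform,
    center each sample, and impute missing values from a down-shifted Gaussian.
    Simplified stand-in for the described limma/PhosR steps."""
    rng = np.random.default_rng(0) if rng is None else rng
    X = np.asarray(intensity, float)                            # samples x proteins, NaN = missing
    X = X[(~np.isnan(X)).mean(axis=1) >= min_valid_sample]      # keep samples with >= 50% valid values
    X = X[:, (~np.isnan(X)).mean(axis=0) >= min_valid_protein]  # keep proteins with >= 30% valid values
    X = np.log2(X)
    X -= np.nanmedian(X, axis=1, keepdims=True)                 # per-sample centering
    for row in X:                                               # impute from the left tail
        miss = np.isnan(row)
        if miss.any():
            mu, sd = np.nanmean(row), np.nanstd(row)
            row[miss] = rng.normal(mu - 1.8 * sd, 0.3 * sd, miss.sum())
    return X
```

The filtering order matters: dropping sparse samples first changes which proteins pass the per-protein threshold, matching the sample-then-protein order described in the text.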
Bioinformatics analysis Transcriptome and proteome dynamic range The expression/intensity for each gene/protein was calculated relative to the total counts/intensity for each fiber. This value was then averaged across fibers in each dataset and log10-transformed. The overlap of detected features between both datasets was analyzed using the VennDiagram package (v 1.7.3). Coefficient of variation—proteomics For each of the 96 well plates used during the MS measurement of the 1000 fiber study, one technical control sample was included in the A1 position to monitor total ion current intensity and quality control of the runs (a total of eleven technical controls). The coefficient of variation between proteins was calculated by dividing the standard deviation by the mean of the LFQ intensities from each protein across technical replicates and then multiplying by one hundred. Correlation analyses Mean log2-transformed transcript counts, protein intensities, and/or fold change values across fibers were calculated, filtered for shared proteins/genes, and Pearson correlation was calculated. Omics-based fiber typing Normalized counts and raw LFQ intensities were retrieved from well-described contractile proteins that have slow (MYH7, TNNT1, TPM3, ATP2A2 and MYL3) and fast (MYH2, MYH1, TNNT3, TPM1, ATP2A1 and MYL1) isoforms . For each isoform combination, the relative expression of each was then calculated and samples were ordered from high to low. The mathematical bottom knee for each curve was then determined using the barcodeRanks function in the DropletUtils package (v 1.18.1). This threshold was used to assign fiber types as pure (type 1, type 2A or type 2X) or hybrid (hybrid 1/2A, hybrid 2A/2X or hybrid 1/2X) (Supplementary Table ). For the features with only two isoforms, fibers were assigned as ‘slow’, ‘fast’ or ‘hybrid’.
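The isoform-ratio logic behind the MYH-based assignment can be sketched in Python. This is purely illustrative: a fixed purity threshold stands in for the data-driven knee point computed with DropletUtils::barcodeRanks, and the threshold value of 0.8 is an assumption, not taken from the text.

```python
def assign_fiber_type(myh7, myh2, myh1, purity=0.8):
    """Assign a MYH-based fiber type from isoform intensities: 'pure' when one
    isoform dominates, otherwise a hybrid of the two most abundant isoforms.
    The fixed `purity` threshold is a stand-in for the per-curve knee point."""
    total = myh7 + myh2 + myh1
    frac = {"1": myh7 / total, "2A": myh2 / total, "2X": myh1 / total}
    ranked = sorted(frac, key=frac.get, reverse=True)
    if frac[ranked[0]] >= purity:
        return f"type {ranked[0]}"
    # report hybrids in the canonical order used in the text (1, 2A, 2X)
    order = {"1": 0, "2A": 1, "2X": 2}
    a, b = sorted(ranked[:2], key=order.get)
    return f"hybrid {a}/{b}"
```

For the two-isoform features (e.g., TNNT1/TNNT3), the same logic collapses to 'slow', 'fast' or 'hybrid'.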
To determine the overlap of the contractile features assigning a fiber as slow, upset plots were generated using the upset function of the ComplexUpset package (v 1.3.3), and then simplified to bar plots. Principal component analysis (PCA) PCA was performed using the RunPCA function of the Seurat package. Scree plots were based on the fviz_eig function of the factoextra package (v 1.0.7), applied to a PCA computed with prcomp . Seurat clustering Uniform Manifold Approximation and Projection (UMAP) clustering was performed based on the K-nearest neighbor graph with the first 6 dimensions as input for both the transcriptome and proteome datasets (Supplementary fig. ). Feature plots were generated using the FeaturePlot function. UMAP plots were colored based on different criteria (MYH-based fiber types, participant, test day) stored in the metadata. Enrichment analysis Genes and protein sets were processed to obtain lists of features that were differentially expressed or, in the case of the top PCA drivers, among the top 5% of drivers in the positive and negative directions of the first and second principal components. Over-representation analysis was then performed on these features with the enrichGO and simplify functions of the clusterProfiler package (v 4.6.2) using all gene ontology terms. Obtained lists of significant terms were manually curated to extract interesting and relevant terms. Hierarchical clustering of ribosomal proteins Raw proteomics data were log2-transformed and filtered to contain proteins enlisted in the ‘cytosolic ribosome’ GO term, followed by Z-scoring prior to heatmap visualization using the pheatmap function from the Pheatmap package (v 1.0.12). The number of clusters was determined by visual inspection and assigning a value of 3 to the cuttree function.
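The z-score-and-cut-tree procedure for the ribosomal proteins can be reproduced in a few lines. The sketch below uses Python/SciPy as an illustrative stand-in for pheatmap/cutree; average linkage on Euclidean distances is an assumed choice, since the text does not state the linkage method.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_proteins(log2_intensity, n_clusters=3):
    """Z-score each protein (row) across fibers, hierarchically cluster the
    rows, and cut the tree into `n_clusters` groups, mirroring the described
    pheatmap + cutree step (three clusters, chosen by visual inspection)."""
    X = np.asarray(log2_intensity, float)
    Z = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
    tree = linkage(Z, method="average", metric="euclidean")
    return fcluster(tree, t=n_clusters, criterion="maxclust")
```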
Differential expression analysis To avoid artificially inflated p -values, which would arise from regarding every fiber as an independent replicate, we employed a pseudobulk differential expression analysis approach. We mathematically downsampled the total data points to one value per MYH-based fiber type per participant by aggregating (transcriptomics) or taking the median value (proteomics). Transcriptomics data were further processed using the DESeq2 pipeline (v 1.38.3) with a ‘~ participant + fiber type’ statistical model. 1000 fiber proteomics data was processed using the limma workflow, fitting the data to a linear model defined as: ‘~ 0 + fiber type + participant‘, whereas the myopathy dataset was fitted to ‘~ 0 + condition’ for the comparisons between conditions and ‘~ 0 + fiber type and condition’ for the comparisons including fiber type. Fitted models were then subjected to gene ranking using the empirical Bayes method implemented in eBayes before extracting the results through topTable , with p -value adjustment set to Benjamini-Hochberg, both functions from the limma package. The threshold for significantly different genes/proteins was defined as an adjusted p -value smaller than 0.05, and a log fold change cut-off of 1 was applied. For the nemaline myopathy dataset, the Xiao significance score was applied, which combines expression fold change and statistical significance . Proteins with a Xiao score under 0.05 were regarded as differentially expressed between conditions. SCENIC Inference of active transcription factors in slow and fast fibers was performed using Single-Cell rEgulatory Network Inference and Clustering (SCENIC, pySCENIC version 0.12.1 with cisTarget v10 databases and annotations) . To prioritize fiber type-specific transcription factors, both their fiber type-specific expression at the mRNA level and regulon activity were combined into a final prioritization score.
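The pseudobulk collapse that precedes the DESeq2/limma modeling can be sketched with pandas. This is an illustrative stand-in; the column names (`participant`, `fiber_type`) are hypothetical, and only the aggregation rules (sum for counts, median for intensities) come from the text.

```python
import pandas as pd

def pseudobulk(fibers, value_cols, how="sum"):
    """Collapse fiber-level measurements to one value per participant and
    MYH-based fiber type: 'sum' aggregates counts (transcriptomics) and
    'median' takes the median intensity (proteomics)."""
    grouped = fibers.groupby(["participant", "fiber_type"])[value_cols]
    out = grouped.sum() if how == "sum" else grouped.median()
    return out.reset_index()
```

Collapsing before model fitting is what prevents the inflated p-values described above: the effective sample size becomes the number of participants, not the number of fibers.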
This prioritization score was calculated as the sum of the z-score-scaled differential expression score (logFC from pseudobulked data) and the z-score-scaled regulon specificity score (RSS).

Non-coding RNA

For the transcriptomics data, the biotype of each gene was determined from the 'GENEBIOTYPE' column using the AnnotationDbi package (v 1.60.2) with the EnsDb.Hsapiens.v86 database. Genomic locations were interrogated using the UCSC Human Genome Browser ( https://genome.ucsc.edu ). Tissue-specific gene expression of interesting long non-coding RNAs was explored using the GTEx Portal database.

Microprotein identification

Construction of putative lncRNA-encoded protein database

RNA sequences of the non-coding transcripts were extracted using the getSequence function from the biomaRt package (v 2.56.1), with 'transcript_exon_intron' as the seqType. Both intergenic (lincRNA) and antisense long non-coding RNA (lncRNA) transcripts were used for database construction. A six-frame translation was used to translate the corresponding RNA sequences into proteins, and the NCBI ORFfinder tool ( https://www.ncbi.nlm.nih.gov/orffinder/ ) was used to extract open reading frames (ORFs) from the transcripts. The minimal ORF length was set to 75 nucleotides, the genetic code was set to "Standard" and "Any sense codon" was used as a start codon to extract the maximum number of open reading frames. The obtained protein fasta file contained multiple entries for each gene name, distinguished by various combinations of: i) transcript identifiers, ii) ORF identifiers and iii) start:stop codons.

Identification of lncRNA-encoded proteins

DIA raw MS data were analyzed with Spectronaut v18 using an in-house generated sample-specific fasta, comprised of the reviewed human proteome (proteome ID: UP000005640, 20,426 proteins, downloaded Sep 2023) and lncRNA-encoded protein sequences (125,223 proteins), in directDIA mode. The default settings were used unless otherwise noted.
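The ORF-extraction settings above (six reading frames, minimal length 75 nucleotides, any sense codon allowed as a start) can be illustrated with a small scan. This is a hedged stand-in for NCBI ORFfinder, not its actual implementation; with "any sense codon" as start, every stop-free codon stretch of sufficient length counts. The toy transcript is invented.

```python
# Six-frame ORF scan: in each frame of both strands, report every stop-free
# codon run of at least min_len nucleotides (any sense codon may start an ORF).
STOPS = {"TAA", "TAG", "TGA"}

def revcomp(seq: str) -> str:
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def orfs_six_frame(seq: str, min_len: int = 75) -> list[str]:
    found = []
    for strand in (seq, revcomp(seq)):          # forward + reverse complement
        for frame in range(3):                  # three frames per strand
            codons = [strand[i:i + 3] for i in range(frame, len(strand) - 2, 3)]
            run = []
            for codon in codons:
                if codon in STOPS:
                    if len(run) * 3 >= min_len:
                        found.append("".join(run))
                    run = []
                else:
                    run.append(codon)
            if len(run) * 3 >= min_len:         # open-ended run at sequence end
                found.append("".join(run))
    return found

# toy transcript: an 87 nt stop-free stretch terminated by TAA in frame 0
toy = "ATG" + "GCT" * 28 + "TAA"
orfs = orfs_six_frame(toy)
```

Other frames of the toy sequence may also yield qualifying stretches, which matches ORFfinder's behavior of reporting every frame independently.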
Data filtering was set to "Qvalue". The false discovery rate (FDR) was set to 1% at the peptide precursor level and 1% at the protein level. The Top3 peptide precursors were used for protein quantification. Downstream data analysis was performed using in-house developed R scripts.

PCA projection

The 1000 fiber and myopathy datasets were initially filtered to remove non-overlapping proteins. They were then combined and normalized using the normalizeBetweenArrays function from the limma package, with the "quantile" method to ensure that both datasets had the same distributions and were comparable. Subsequently, the merged dataset was split back into the two separate datasets, namely the 1000 fiber dataset and the myopathy dataset. For the 1000 fiber dataset, PCA was calculated using the prcomp function. The myopathy dataset was then multiplied by the PC loadings obtained from the 1000 fiber dataset to generate its PCA projection. Finally, the PCA projections from the myopathy samples were plotted on top of the 1000 fiber PCA visualization.

Muscle-specific ribosomal gene signature

The skeletal muscle-specific ribosomal gene signature, consisting of log2 fold change values comparing the mRNA expression of ribosomal subunits in skeletal muscle against 52 other human tissues, was downloaded from Panda et al . Log2 fold change values were ranked to identify ribosomal proteins with the highest overexpression in human skeletal muscle.

Structural analysis of ribosomal proteins

The human 80S ribosome structural model (Protein Data Bank: 4V6X) was downloaded from the Protein Data Bank website (RCSB PDB). Visualization and editing of the ribosomal structure, and preparation of figures and movies, were performed in UCSF ChimeraX .

Antibody-based fiber typing

Dot blot was conducted following a previously described protocol with a few modifications .
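The PCA projection described above (multiplying the second dataset by the loadings of the first) can be sketched as follows. This is a minimal NumPy illustration with synthetic matrices standing in for the real, quantile-normalized data; the authors used prcomp in R.

```python
# Project new samples into an existing PCA space: center them with the
# reference means and multiply by the reference PC loadings.
import numpy as np

rng = np.random.default_rng(1)
reference = rng.normal(size=(100, 10))     # e.g. 1000 fiber data: samples x proteins
center = reference.mean(axis=0)
X = reference - center

# loadings = right singular vectors of the centered reference matrix
_, _, vt = np.linalg.svd(X, full_matrices=False)
ref_scores = X @ vt.T                      # reference PCA scores

new_samples = rng.normal(size=(5, 10))     # e.g. myopathy samples
projected = (new_samples - center) @ vt.T  # same loadings -> same PC space
```

Because both datasets are multiplied by the same loadings, the projected samples can be overlaid directly on the reference PCA plot.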
Initially, two identical PVDF membranes were activated using 96% ethanol and washed with transfer buffer. Subsequently, the membranes were placed on wet filter paper with transfer buffer until they dried. Next, 1 µL of fiber lysate was spotted at the same position on both membranes, and the membranes were allowed to dry. Reactivation of the membranes was carried out using 96% ethanol, followed by gentle washing with TBST. The membranes were then blocked in TBST containing 5% skim milk for 15 minutes. After blocking, the membranes were washed three times with TBST and incubated with a primary antibody solution of either anti-MYH7 (A4.840) or anti-MYH2 (A4.74), both from the Developmental Studies Hybridoma Bank (DSHB), at a dilution of 1:200 in TBST containing 1% skim milk for one hour. Subsequently, the membranes were gently washed three times with TBST and incubated with the secondary antibody (anti-mouse) at a dilution of 1:20,000 in TBST containing 1% skim milk for two hours. Finally, the membranes were washed three times for five minutes each with TBST and visualized using Immobilon Forte (Millipore) in a ChemiDoc XRS+ (Bio-Rad) imaging system.

RNAscope

8 µm sections from three different fixed-frozen human muscle biopsies were used for RNAscope labeling and subsequent immunohistochemistry (IHC). For detection of RP11-255P5.3 and LINC01405, the commercially available RNAscope Multiplex Fluorescent Assay V2 (Advanced Cell Diagnostics) and probes against Lnc-ERCC5-5-C1 (# 1276851-C1) and LINC01405-C2 (# 549201-C2) (Advanced Cell Diagnostics) were used according to the manufacturer's protocols. To control tissue quality, the positive 3-plex probe (# 320881) and negative 3-plex probe (# 320871) were used. To visualize different subtypes of muscle fibers, sections after RNAscope were blocked with 5% donkey serum and incubated with antibodies against MYH2 (A4.74-s (1:5); DSHB) and MYH7 (A4.840 (1:5); DSHB) overnight.
After washing, sections were incubated with Alexa Fluor 488-conjugated Donkey Anti-Mouse IgG, Fcγ Subclass 1 and DyLight 405-conjugated Donkey Anti-Mouse IgM secondary antibodies, respectively, and mounted with ProLong™ Diamond Antifade Mountant (Invitrogen). Slides were imaged using a Zeiss Axio Observer microscope equipped with an Axiocam 702 camera. Biopsies from three individuals were used for quantification, with 187 muscle fibers counted in total. As each dot in RNAscope corresponds to one RNA molecule, the number of dots/mm² was used as a measure of RNA expression. We first determined the number of dots/mm² within each fiber for both probes, then averaged the results by fiber type and participant. These averages were then used as input for a two-sample t-test.

Immunostaining for muscle sections

Immunolabelling was performed on 10 μm cryosections, fixed in 4% PFA (10 min), permeabilized in 0.1% Triton X-100 (20 min) and blocked in 10% Normal Goat Serum (50062Z, Life Technologies) with 0.1% BSA (1 h). Sections were incubated overnight (4 °C) with primary antibodies against MYH7 (mouse monoclonal A4.951, Santa Cruz, sc-53090, diluted 1:25) or MYH2 (mouse monoclonal SC71, DSHB, 1:25), each combined with a primary antibody against TNNT1 (rabbit polyclonal HPA058448, Sigma, diluted 1:500) in 5% goat serum with 0.1% BSA and 0.1% Triton X-100. Alexa Fluor Goat anti-Mouse 647 (A21237) was used as the secondary antibody for the MYHs and Alexa Fluor Donkey anti-Rabbit 488 (A11034) for TNNT1 (Life Technologies, 1:500 each in 10% Normal Goat Serum). Fluorescent images were obtained with a 10x objective on a Zeiss Axio Observer 3 fluorescence microscope with a Colibri 5 LED detector, combined with a Zeiss Axiocam 705 mono camera, using Zen software (Zeiss). For visualization purposes, a selection of fibers was mounted on copper grids glued to a microscopy slide and imaged under a stereomicroscope.
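The RNAscope quantification described above (dots/mm² per fiber, averaged per fiber type within each participant, then a two-sample t-test on the per-participant averages) can be sketched as follows. The densities below are illustrative, not measured values.

```python
# Average dot densities per participant x fiber type, then compare fiber types
# with a two-sample t-test across the per-participant averages.
import pandas as pd
from scipy import stats

fibers = pd.DataFrame({
    "participant":  ["P1"] * 4 + ["P2"] * 4 + ["P3"] * 4,
    "fiber_type":   ["slow", "slow", "fast", "fast"] * 3,
    "dots_per_mm2": [80, 90, 12, 10, 75, 85, 15, 9, 95, 88, 11, 14],
})

per_participant = (fibers
                   .groupby(["participant", "fiber_type"])["dots_per_mm2"]
                   .mean()
                   .unstack())   # rows: participants, columns: fiber types

t, p = stats.ttest_ind(per_participant["slow"], per_participant["fast"])
```

Averaging within participants first keeps the t-test at n = participants rather than n = fibers, matching the pseudobulk logic used elsewhere in the analysis.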
Reporting summary

Further information on research design is available in the Reporting Summary linked to this article.

Participant information

Fourteen participants (12 males / 2 females) of Caucasian origin volunteered to take part in this study, which was approved by the Ethical Committee of Ghent University Hospital (BC-10237), in agreement with the 2013 Declaration of Helsinki, and registered on ClinicalTrials.gov (NCT05131555). General characteristics of the participants can be found in Supplementary Table . After oral and written informed consent, participants were medically screened before final inclusion. Participants were young (22-42 years old), healthy (no diseases and non-smoking) and moderately physically active. Maximal oxygen uptake was determined as a marker of physical fitness during a graded incremental cycling test, as previously described .

Muscle biopsy collection

Muscle biopsies were collected in the rested and fasted state, on three different days, separated by 14 days. As these samples were collected as part of a larger study, on each of these days participants ingested a placebo (lactose), an H1-receptor antagonist (540 mg fexofenadine) or an H2-receptor antagonist (40 mg famotidine) 40 minutes before muscle biopsy collection. We have previously shown that these histamine receptor antagonists do not affect the resting skeletal muscle state , and no clustering was apparent based on condition in our quality control plots (Supplementary figs. & Supplementary fig. ). Dietary intake was standardized 48 hours before each experimental day (41.4 kcal/kg body weight, 5.1 g/kg body weight carbohydrates, 1.4 g/kg body weight protein and 1.6 g/kg body weight fat per day), followed by a standardized breakfast on the morning of the experimental day (1.5 g/kg body weight carbohydrates). Muscle biopsies of the m. vastus lateralis were then collected after local anesthesia (0.5 mL of 1% Xylocaïne without epinephrine) using the percutaneous Bergström technique with suction .
The muscle samples were immediately submerged in RNAlater and stored at 4 °C until manual fiber dissection (max. 3 days).

Single fiber isolation

Freshly excised muscle fiber bundles were transferred to fresh RNAlater in a petri dish. Individual muscle fibers were then manually dissected using a stereomicroscope and fine forceps. Twenty-five fibers were dissected per biopsy, with special care taken to select fibers from different sections of the biopsy. After dissection, each fiber was carefully submerged in 3 µL of lysis buffer (SingleShot Cell Lysis kit, Bio-Rad), containing proteinase K and DNase enzymes to remove unwanted proteins and DNA. Next, cell lysis and protein/DNA removal were initiated by short vortexing, spinning the liquid down in a microcentrifuge and incubation at room temperature (10 min). Lysates were then incubated for 5 min at 37 °C and 5 min at 75 °C in a thermocycler (T100, Bio-Rad), immediately followed by storage at −80 °C until further processing.

Sequencing library preparation

Illumina-compatible libraries from polyadenylated RNA were prepared from 2 µL of the muscle fiber lysates using the QuantSeq-Pool 3' mRNA-Seq library prep kit (Lexogen). Detailed methodology can be found in the manufacturer's guidelines. The process was initiated by reverse transcription for first-strand cDNA synthesis, during which Unique Molecular Identifiers (UMIs) and sample-specific i1 barcodes were introduced, enabling sample pooling and reducing technical variability in the downstream process. Next, cDNA from 96 fibers was pooled and purified using magnetic beads, followed by RNA removal and second-strand synthesis by random priming. Libraries were purified using magnetic beads, followed by addition of pool-specific i5/i7 indices and PCR amplification. A last purification step was performed, finalizing the Illumina-compatible libraries.
A high-sensitivity small DNA Fragment Analysis kit (Agilent Technologies, DNF-477-0500) was used to assess the quality of each library pool.

Illumina sequencing

The individual pools were further pooled equimolarly (2 nM), based on Qubit-quantified concentrations. The final pool was subsequently sequenced with a NovaSeq S2 kit (1 × 100 nucleotides) with a loading of 2 nM (4% PhiX) in standard mode on a NovaSeq 6000 instrument.

Primary data processing

Our pipeline was based on the QuantSeq Pool data analysis pipeline from Lexogen ( https://github.com/Lexogen-Tools/quantseqpool_analysis ). First, the data were demultiplexed based on the i7/i5 indices with bcl2fastq2 (v2.20.0). The next demultiplexing step was performed via idemux (v0.1.6) according to the i1 sample-specific barcodes in read 2, followed by extraction of UMI sequences with umi_tools (v1.0.1). Trimming of the reads was then performed in multiple rounds with cutadapt (v3.4), with removal of too-short reads (length <20) or reads consisting entirely of adapter sequences. Reads were then aligned to the human genome with STAR (v2.6.0c), followed by BAM file indexing with SAMtools (v1.11). Read duplicates were removed with umi_tools (v1.0.1). Finally, counting of the alignments was performed with featureCounts from Subread (v2.0.3). At several intermediate steps of the pipeline, quality control was performed with FastQC (v0.11.9).

Initial Seurat processing

All further bioinformatics processing and visualization was performed in R (v4.2.3), primarily with the Seurat (v 4.4.0) workflow . The individual UMI count and metadata matrices were thus transformed into a Seurat object. Genes with expression in less than 30% of all fibers were removed. Low-quality samples were then removed based on a minimum threshold of 1000 UMI counts and 1000 detected genes. This resulted in a total of 925 fibers that passed all quality control filtering steps.
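The quality-control filtering just described (genes detected in less than 30% of fibers removed, then fibers below 1000 UMIs or 1000 detected genes removed) can be sketched as follows. The count matrix is a small synthetic stand-in, and the Seurat object handling is omitted.

```python
# Gene- then fiber-level QC filtering of a genes x fibers count matrix.
import numpy as np

rng = np.random.default_rng(2)
counts = rng.integers(0, 5, size=(2000, 50))   # 2000 genes x 50 fibers
counts[:, 0] = 0                               # one empty, low-quality fiber

# 1) drop genes detected in < 30% of fibers
detected_frac = (counts > 0).mean(axis=1)
counts = counts[detected_frac >= 0.30, :]

# 2) drop fibers with < 1000 UMIs or < 1000 detected genes
umis_per_fiber = counts.sum(axis=0)
genes_per_fiber = (counts > 0).sum(axis=0)
keep = (umis_per_fiber >= 1000) & (genes_per_fiber >= 1000)
filtered = counts[:, keep]
```

In this toy matrix only the zeroed-out fiber fails the thresholds, so 49 of 50 fibers survive; on the real data the same two filters left 925 fibers.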
Normalization of UMI counts was performed using the SCTransform v2 Seurat method , including all 7418 detected features and regressing out participant variation. All relevant metadata can be found in Supplementary Dataset .

Sample collection - 1000 fiber proteome

Participant information

Stored biobank muscle specimens were used for the purpose of the present study (ClinicalTrials.gov identifier: NCT04048993). The specimens were collected from five active and healthy male volunteers (aged 21–35 years) of Caucasian ancestry who gave their written and oral informed consent, with approval from the Science Ethics Committee of the Capital Region in Denmark (H-1-2012-090) and in compliance with the guidelines of the 2013 Declaration of Helsinki. General characteristics of the participants can be found in Supplementary Table . Participants were young, healthy (no diseases and non-smoking) and moderately physically active.

Muscle biopsy collection

Participants arrived in the morning after an overnight fast and rested in the supine position for 1 hour. Then, local anesthesia (2-3 mL Xylocaine 2%; lidocaine without epinephrine, AstraZeneca, Denmark) was applied under the skin above the fascia at the belly of the m. vastus lateralis. A muscle biopsy was sampled through a small 3–4 mm incision using a Bergström needle with suction. The muscle biopsy specimen was snap-frozen in liquid nitrogen and stored at −80 °C until analysis.

Single muscle fiber isolation

Muscle fibers were isolated from freeze-dried specimens as previously described . In brief, muscle biopsies were freeze-dried for 48 hours. Subsequently, fibers were isolated in a humidity- and temperature-controlled room (humidity of 25%) using fine forceps under a stereomicroscope. ~200 single muscle fibers were isolated from each biopsy, resulting in a total of 1038. To ensure the fibers settled at the bottom of the tube, each fiber-containing tube underwent centrifugation at 20,000 g in a small centrifuge.
Next, fibers were resuspended in 15 µL of lysis buffer (1% sodium deoxycholate (SDC), 40 mM chloroacetamide (CAA), 10 mM dithiothreitol (DTT) in 50 mM Tris pH 8.5).

Sample collection – myopathy

Participant information

Six patients with severe nemaline myopathy were selected from our nemaline myopathy study cohort. Three patients (2 male and 1 female) had pathogenic variants in ACTA1 , representing the conventional severe form, and three patients (3 male) had pathogenic variants in TNNT1 , resulting in a rare, progressive form of nemaline myopathy. Three healthy individuals with no history of neuromuscular disease were used as controls. All participants are of Caucasian ancestry (Supplementary Table ).

Muscle biopsy collection

Healthy control participant biopsies ( n = 3 males) were taken from an original study , and were therefore collected, snap-frozen in liquid nitrogen and stored at −80 °C as originally described. For the present study, a fragment of this stored biopsy was dissected under sterile, frozen conditions before being prepared for single myofiber isolation (detailed below). Acquisition of biopsies of healthy control participants was approved by the local ethics committee (Copenhagen and Frederiksberg) in Denmark (hs:h-15002266). Those of myopathy patients were consented, stored and used in accordance with the Human Tissue Act under local ethical approval in the United Kingdom (REC 13/NE/0373). All procedures were carried out in accordance with the Declaration of Helsinki.

Single fiber isolation

Dissected fragments of muscle biopsy were placed in ice-cold, 22-micron-filtered relaxing solution (4 mM Mg-ATP, 1 mM free Mg²⁺, 10^-6.00 mM free Ca²⁺, 20 mM imidazole, 7 mM EGTA, 14.5 mM creatine phosphate, KCl to an ionic strength of 180 mM and pH to 7.0) for ~3 minutes before being immersed in fresh relaxing solution on a sterile petri dish and mounted on ice under a dissection microscope for single fiber isolation.
Fibers were cleaved from the tissue/biopsy, ensuring that a variety of sample/biopsy locations were used and that only single fibers were selected. Following isolation, fibers were manually moved to a sterile 96-well plate containing 15 µL of lysis buffer (identical to that detailed above), where the tweezers holding the fiber were submerged, spun and agitated in lysis buffer to ensure the fibers dissociated from the tweezers. To ensure fibers settled to the bottom of the well, the 96-well plate was subjected to gentle vortexing and centrifugation (1,000 g).

Proteomics analysis

Sample preparation

Samples from both proteomics studies followed the same sample preparation workflow. To extract the proteins, samples were boiled at 95 °C in a thermomixer with gentle shaking (800 rpm) and sonicated in a Bioruptor instrument with 30 seconds on/off cycles for 15 minutes. A small 5 µL fraction of lysate from each sample was saved for antibody-based fiber typing of the 1000 fiber samples. Next, samples were processed following a modified version of the in-solution digestion sample preparation protocol. In brief, the total volume was adjusted to 50 µL by addition of digestion buffer, containing 50 mM Tris pH 8.5 buffer, an enzyme-to-protein ratio of 1:500 LysC (Wako) and an enzyme-to-protein ratio of 1:100 trypsin (Promega). Single muscle fiber lysates were digested overnight in a thermomixer set to 37 °C and 800 rpm. The next day, protein digestion was quenched by addition of 50 µL of 2% trifluoroacetic acid (TFA) in isopropanol. Peptides were desalted using in-house prepared single-use reverse-phase StageTips containing styrenedivinylbenzene reverse-phase sulfonate (SDB-RPS) disks. Desalted peptides were then loaded onto Evotips (Evosep) following the manufacturer's instructions prior to LC-MS/MS analysis. Bulk tissue samples were prepared using the same protocol as for single fibers, with a few modifications to sample lysis.
Tissue samples were first powdered using a tissue crusher over dry ice before resuspending the powder in the same lysis buffer described above. The samples were then homogenized using an IKA Turrax homogenizer for 2 minutes prior to boiling and sonication. From there onwards, the samples underwent the same protocol described above.

Proteomics library preparation

Fibers from the five healthy control individuals participating in the 1000 fiber study were carefully dissected and combined in order to create a pooled fiber lysate. Then, 200 µg of protein from each participant-specific lysate were pooled together into one final protein lysate that was processed following the same sample preparation workflow described above. 20 µg of desalted peptides were fractionated using high-pH reverse-phase chromatography (HpH-RP). Fractionation was carried out on a Kinetex 2.6 µm EVO C18 100 Å, 150 × 0.3 mm column (Phenomenex) using an EASY-nLC 1200 System (Thermo) operating at 1.5 µL/min. Separation was accomplished using a 62 min step-gradient from 3% to 60% solvent B (10 mM TEAB in 80% acetonitrile), with solvent A containing 10 mM TEAB in water. The total run time was 98 min, which included wash and column equilibration. Throughout the fractionation, peptides were eluted and collected every 60 s, yielding 96 single fractions without concatenation. Finally, 200 ng of HpH-RP fractionated peptides were loaded, concentrated and desalted on Evotips (Evosep) following the instructions provided by the manufacturer.

Liquid chromatography tandem mass spectrometry

Proteomics measurements were performed using LC-MS instrumentation consisting of an Evosep One HPLC system (Evosep) coupled via electrospray ionization to a timsTOF SCP mass spectrometer (Bruker). Peptides were separated on 8 cm, 150 μm ID columns packed with 1.5 μm C18 beads (Evosep).
Chromatographic separation was achieved by the '60 samples per day' method, followed by electrospray ionization through a CaptiveSpray ion source and a 10 μm emitter into the MS instrument. Single muscle fiber peptides were measured in DIA-PASEF mode following a previously described method , while library fractions were measured using DDA-PASEF. In brief, the DDA-PASEF scan range encompassed 100–1700 m/z for both MS and MS/MS, and the TIMS mobility range was set to 0.6–1.6 (V cm⁻²). Both TIMS ramp and accumulation times were configured to 100 ms, and 10 PASEF ramps were recorded for a total cycle time of 1.17 s. The MS/MS target intensity and intensity threshold were defined as 20,000 and 1,000, respectively. An exclusion list of 0.4 min for precursors within 0.015 m/z and 0.015 V cm⁻² width was also activated. For DIA-PASEF, the scan range was set to 400-1000 m/z, the TIMS mobility range to 0.64-1.37 (V cm⁻²), and ramp and accumulation times were both set to 100 ms. A short-gradient method was used, which included 8 DIA-PASEF scans with three 25 Da windows per ramp, resulting in an estimated cycle time of 0.95 s.

MS data processing

Library files were processed using the MSFragger functionality within FragPipe v19.0 under the SpecLib workflow with default settings, including a minimum peptide length of seven amino acids and a maximum of two missed cleavages allowed . Spectra were searched against a reviewed human FASTA from UniProt (March 2022, 20,410 entries), and the output library contained a total of 5350 protein groups and 84,383 precursors. Sample raw MS files were analyzed using DIA-NN version 1.8 in a library-based manner against the MS library just described.
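As a quick consistency check on the DIA-PASEF scheme described above: 8 scans with three 25 Da windows per ramp tile exactly the 400-1000 m/z precursor range. The contiguous, non-overlapping window placement assumed below is an illustration; the cited method defines the exact scheme.

```python
# Verify that 8 scans x 3 windows x 25 Da cover the 400-1000 m/z scan range.
n_scans, windows_per_scan, window_da = 8, 3, 25
scan_lo, scan_hi = 400, 1000

covered = n_scans * windows_per_scan * window_da   # total Da covered
edges = [scan_lo + i * window_da                   # window boundaries,
         for i in range(n_scans * windows_per_scan + 1)]  # assuming contiguity
```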
Protein group quantification was based on proteotypic peptides, the neural network was set to double-pass mode, the quantification strategy was set to "Robust LC (high accuracy)" and the match-between-runs option was enabled; the remaining parameters were kept as default, which included a precursor FDR of 1% and a peptide length of 7-30 amino acids.

Data processing

Further data analysis was performed in the R environment (R version 4.2.2). Both a metadata data frame containing sample and participant information and the "PG_matrix.tsv" file from DIA-NN's output were loaded into RStudio. The 1000 fiber data frame was filtered to remove samples with less than 50% valid protein intensity values, resulting in a total of 974 fibers. Next, rows were filtered to remove proteins with less than 30% valid values across samples, resulting in a total of 1685 proteins. Regarding the myopathy dataset, after filtering samples for 50% valid values, the number of samples was 250. We included in the analysis proteins that were quantified in 70% of the samples in at least one condition (conditions: control, actin myopathy and troponin myopathy), resulting in a total of 1545 proteins. Both data frames were then log2 transformed and normalized using the normalizeBetweenArrays function from the limma package (v 3.54.2), with the method argument set to quantile . Then, batch correction with the ComBat function from the sva package (v3.50.0) was applied to minimize the effect of the three technical batches originating from the mass spectrometry measurements. Finally, missing values were replaced by random numbers drawn from a Gaussian distribution using the default settings of the tImpute function from the PhosR package (v 1.12.0) . All relevant metadata for the 1000 fiber proteome and nemaline myopathy datasets can be found in Supplementary Data & , respectively.
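The condition-aware protein filter described above (keep a protein if it is quantified in at least 70% of the samples of at least one condition) can be sketched as follows. The intensity matrix, condition labels and missing-value pattern are invented for illustration; the real analysis was done in R.

```python
# Keep proteins quantified in >= 70% of samples in at least one condition.
import numpy as np

conditions = np.array(["control"] * 4 + ["actin"] * 4 + ["troponin"] * 4)
rng = np.random.default_rng(3)
intensities = rng.normal(20, 2, size=(5, 12))   # 5 proteins x 12 samples
intensities[0, :] = np.nan                      # never quantified -> dropped
intensities[1, 4:] = np.nan                     # complete in control -> kept

def keep_protein(row: np.ndarray) -> bool:
    # valid-value fraction per condition; one condition >= 70% is enough
    return any((~np.isnan(row[conditions == c])).mean() >= 0.70
               for c in np.unique(conditions))

mask = np.array([keep_protein(row) for row in intensities])
filtered = intensities[mask]
```

Checking each condition separately prevents proteins that are specific to one group (e.g. present only in controls) from being discarded by a global completeness cut-off.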
Participant information Stored biobank muscle specimens were used for the purpose of the present study (Clinicaltrials.gov identifier: NCT04048993). The specimens were collected from five active and healthy male volunteers (aged 21–35 years) of Caucasian ancestry who gave their written and oral informed consent with approval from the Science Ethics Committee of the Capital Region in Denmark (H-1-2012-090) and complied with the guidelines of the 2013 Declaration of Helsinki. General characteristics of the participants can be found in Supplementary Table . Participants were young, healthy (no diseases and non-smoking) and moderately physically active. Muscle biopsy collection Participants arrived in the morning after an overnight fast and rested in the supine position for 1 hour. Then, local anesthesia (2-3 mL Xylocaine 2%; lidocaine without epinephrine, AstraZeneca, Denmark) was applied under the skin above the fascia at the belly of the m. vastus lateralis muscle. A muscle biopsy was sampled through a small 3–4 mm incision using a Bergström needle with suction. The muscle biopsy specimen was snap-frozen in liquid nitrogen and stored at −80 °C until analysis. Single muscle fiber isolation Muscle fibers were isolated from freeze-dried specimens as previously described . In brief, muscle biopsies were freeze-dried for 48 hours. Subsequently, fibers were isolated in a humidity- and temperature-controlled room (humidity of 25%) using fine forceps under a stereomicroscope. ~200 single muscle fibers were isolated for each biopsy, resulting in a total of 1038. To ensure the fibers settled at the bottom of the tube, each fiber-containing tube underwent centrifugation at 20,000 g using a small centrifuge. Next, fibers were resuspended in 15 µL of lysis buffer (1% sodium dodecyl sulfate (SDC), 40 mM chloroacetamide (CAA), 10 mM dithiothreitol (DTT) in 50 mM Tris pH 8.5). 
Participant information
Six patients with severe nemaline myopathy were selected from our nemaline myopathy study cohort.
Three patients (2 male and 1 female) had pathogenic variants in ACTA1 , representing the conventional severe form, and three patients (3 male) had pathogenic variants in TNNT1 , resulting in a rare, progressive form of nemaline myopathy. Three healthy individuals with no history of neuromuscular disease were used as controls. All participants are of Caucasian ancestry (Supplementary Table ).
Muscle biopsy collection
Healthy control participant biopsies ( n = 3 males) were taken from an original study , and were therefore collected, snap-frozen in liquid nitrogen and stored at -80 °C as originally described. For the present study, a fragment of this stored biopsy was dissected under sterile, frozen conditions before being prepared for single myofiber isolation (detailed below). Acquisition of biopsies from healthy control participants was approved by the local ethics committee (Copenhagen and Frederiksberg) in Denmark (hs:h-15002266). Those of myopathy patients were consented, stored, and used in accordance with the Human Tissue Act under local ethical approval in the United Kingdom (REC 13/NE/0373). All procedures were carried out in accordance with the Declaration of Helsinki.
Single fiber isolation
Dissected fragments of muscle biopsy were placed in ice-cold, 22 micron filtered relaxing solution (4 mM Mg-ATP, 1 mM free Mg2+, 10^−6.00 mM free Ca2+, 20 mM imidazole, 7 mM EGTA, 14.5 mM creatine phosphate, KCl to an ionic strength of 180 mM and pH to 7.0) for ~3 minutes before being immersed in fresh relaxing solution on a sterile petri dish and mounted on ice under a dissection microscope for single fiber isolation. Fibers were cleaved from the tissue/biopsy ensuring that a variety of sample/biopsy locations were used and that only single fibers were selected.
Following isolation, fibers were manually moved to a sterile 96-well plate containing 15 µL of lysis buffer (identical to that detailed above), where the tweezers holding the fiber were submerged, spun and agitated in the lysis buffer to ensure the fibers dissociated from the tweezers. To ensure the fibers settled to the bottom of the well, the 96-well plate was subjected to gentle vortexing and centrifugation (1,000 g).
Sample preparation
Samples from both proteomics studies followed the same sample preparation workflow. In order to extract the proteins, samples were boiled at 95 °C in a thermomixer with gentle shaking (800 rpm) and sonicated in a bioruptor instrument with 30 seconds on/off cycles for 15 minutes. A small 5 µL fraction of lysate from each sample was saved for antibody-based fiber typing of the 1000 fiber samples. Next, samples were processed following a modified version of the in-solution digestion sample preparation protocol. In brief, the total volume was adjusted to 50 µl by addition of digestion buffer containing 50 mM Tris pH 8.5, LysC (Wako) at an enzyme-to-protein ratio of 1:500 and trypsin (Promega) at a ratio of 1:100. Single muscle fiber lysates were digested overnight in a thermomixer set to 37 °C and 800 rpm. The next day, protein digestion was quenched by addition of 50 µl of 2% trifluoroacetic acid (TFA) in isopropanol.
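In weight terms, the ratios above fix the enzyme amounts once the protein input is known. A quick illustrative sketch (Python; the 50 µg protein input is hypothetical, since the protocol specifies only ratios):

```python
# Illustrative arithmetic only: the protocol above specifies
# enzyme-to-protein ratios (w/w), not absolute amounts.
def enzyme_amounts(protein_ug: float) -> dict:
    """Enzyme masses (in µg) for a given protein input, using the
    1:500 (LysC) and 1:100 (trypsin) ratios described above."""
    return {"LysC_ug": protein_ug / 500, "trypsin_ug": protein_ug / 100}

# e.g. for a hypothetical 50 µg protein input
amounts = enzyme_amounts(50.0)
```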
Peptides were desalted using in-house prepared single-use reverse-phase StageTips containing styrenedivinylbenzene reverse-phase sulfonate (SDB-RPS) disks. Then, desalted peptides were loaded onto Evotips (Evosep) following the manufacturer's instructions prior to LC-MS/MS analysis. Bulk tissue samples were prepared using the same protocol utilized for single fibers, with a few modifications to sample lysis. Tissue samples were first powdered using a tissue crusher over dry ice before resuspending the powder in the same lysis buffer described above. Then, the samples were homogenized using an IKA Turrax homogenizer for 2 minutes prior to boiling and sonication. From there onwards the samples underwent the same protocol described above.
Proteomics library preparation
Fibers from the five healthy control individuals participating in the 1000 fiber study were carefully dissected and combined in order to create a pooled fiber lysate. Then, 200 µg of protein from each participant-specific lysate were pooled together into one final protein lysate that was processed following the same sample preparation workflow just described. 20 µg of desalted peptides were fractionated using high pH reverse-phase chromatography (HpH-RP). Fractionation was carried out on a Kinetex 2.6 µm EVO C18 100 Å, 150 × 0.3 mm column manufactured by Phenomenex, using an EASY-nLC 1200 System (Thermo) operating at 1.5 µL/min. Separation was accomplished using a 62 min step-gradient from 3% to 60% solvent B (10 mM TEAB in 80% acetonitrile; solvent A contained 10 mM TEAB in water). The total run time was 98 min, which included wash and column equilibration. Throughout the fractionation, peptides were eluted and collected every 60 s, yielding 96 single fractions without concatenation. Finally, 200 ng of HpH-RP fractionated peptides were loaded, concentrated and desalted on Evotips (Evosep) following the instructions provided by the manufacturer.
Liquid chromatography tandem mass spectrometry
Proteomics measurements were performed using LC-MS instrumentation consisting of an Evosep One HPLC system (Evosep) coupled via electrospray ionization to a timsTOF SCP mass spectrometer (Bruker). Peptides were separated on 8 cm, 150 μm inner diameter columns packed with 1.5 μm C18 beads (Evosep). Chromatographic separation was achieved by the '60 samples per day' method, followed by electrospray ionization through a CaptiveSpray ion source and a 10 μm emitter into the MS instrument. Single muscle fiber peptides were measured in DIA-PASEF mode following a previously described method , while library fractions were measured using DDA-PASEF. In brief, the DDA-PASEF scan range encompassed 100–1700 m/z for both MS and MS/MS, and the TIMS mobility range was set to 0.6–1.6 (V cm −2 ). Both TIMS ramp and accumulation times were configured to 100 ms, and 10 PASEF ramps were recorded for a total cycle time of 1.17 s. The MS/MS target intensity and intensity threshold were defined as 20,000 and 1,000, respectively. An exclusion list of 0.4 min for precursors within 0.015 m/z and 0.015 V cm −2 width was also activated. For DIA-PASEF the scan range was established at 400-1000 m/z, the TIMS mobility range at 0.64-1.37 (V cm −2 ), and ramp and accumulation times were both set to 100 ms. A short-gradient method was used, which included 8 DIA-PASEF scans with three 25 Da windows per ramp, resulting in an estimated cycle time of 0.95 s.
MS data processing
Library files were processed using the MSFragger functionality within FragPipe v19.0 under the SpecLib workflow with default settings, including a minimum peptide length of seven amino acids and a maximum of two missed cleavages allowed . Spectra were searched against a reviewed human FASTA from UniProt (March 2022, 20410 entries) and the output library contained a total of 5350 protein groups and 84383 precursors.
Sample raw MS files were analyzed using DIA-NN version 1.8 in a library-based manner against the MS library just described. Protein group quantification was based on proteotypic peptides, the neural network was set to double-pass mode, the quantification strategy was set to "Robust LC (high accuracy)" and the match-between-runs option was enabled; the remaining parameters were left at their defaults, which included a precursor FDR of 1% and a peptide length of 7-30 amino acids.
Data processing
Further data analysis was performed in the R environment (R version 4.2.2). Both a metadata data frame containing sample and participant information and the "PG_matrix.tsv" file from DIA-NN's output were loaded in RStudio. The 1000 fiber data frame was filtered to remove samples with less than 50% valid protein intensity values, resulting in a total of 974 fibers. Next, rows were filtered to remove proteins with less than 30% valid values across samples, resulting in a total of 1685 proteins. Regarding the myopathy dataset, after filtering samples for 50% valid values, the number of samples was 250. We included in the analysis proteins that were quantified in 70% of the samples in at least one condition (conditions: control, actin myopathy and troponin myopathy), resulting in a total of 1545 proteins. Both data frames were then log2 transformed and normalized using the normalizeBetweenArrays function from the limma package (v 3.54.2), with the method argument set to quantile . Then, batch correction with the ComBat function from the sva package (v 3.50.0) was applied to minimize the effect of the three technical batches originating during mass spectrometry measurement. Finally, missing values were replaced by random numbers from a Gaussian distribution with the default settings of the tImpute function from the PhosR package (v 1.12.0) . All relevant metadata for the 1000 fiber proteome and nemaline myopathy datasets can be found in Supplementary Data & , respectively.
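The filtering and normalization steps can be sketched as follows. The actual pipeline uses limma, sva and PhosR in R, so this Python/pandas version only illustrates the logic; in particular, the down-shifted Gaussian imputation parameters below are a common convention and not PhosR's documented defaults, and batch correction (ComBat) is omitted:

```python
import numpy as np
import pandas as pd

def preprocess(df: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    """Illustrative sketch of the filtering/normalization described above.
    Rows = proteins, columns = fibers; NaN = missing value."""
    # drop fibers (columns) with < 50% valid protein intensity values
    df = df.loc[:, df.notna().mean() >= 0.5]
    # drop proteins (rows) with < 30% valid values across remaining fibers
    df = df.loc[df.notna().mean(axis=1) >= 0.3]
    # log2 transform
    logged = np.log2(df)
    # quantile normalization: every sample gets the same distribution
    # (complete-case sketch; ranks of missing values stay missing)
    ranks = logged.rank(method="first")
    row_means = pd.Series(np.sort(logged.values, axis=0).mean(axis=1),
                          index=np.arange(1.0, len(logged) + 1))
    normalized = ranks.apply(lambda col: col.map(row_means))
    # replace remaining missing values with random Gaussian draws
    # (down-shifted Gaussian, a common convention; PhosR's defaults differ)
    vals = normalized.to_numpy()
    missing = np.isnan(vals)
    if missing.any():
        rng = np.random.default_rng(seed)
        mu, sd = np.nanmean(vals), np.nanstd(vals)
        vals[missing] = rng.normal(mu - 1.8 * sd, 0.3 * sd, size=missing.sum())
    return pd.DataFrame(vals, index=normalized.index, columns=normalized.columns)
```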
Transcriptome and proteome dynamic range
The expression/intensity for each gene/protein was calculated relative to the total counts/intensity for each fiber. This value was then averaged across fibers in each dataset and log10-transformed. The overlap of detected features between both datasets was analyzed using the VennDiagram package (v 1.7.3).
Coefficient of variation—proteomics
For each of the 96-well plates used during the MS measurement of the 1000 fiber study, one technical control sample was included in position A1 to monitor total ion current intensity and quality control of the runs (a total of eleven technical controls). The coefficient of variation between proteins was calculated by dividing the standard deviation by the mean of the LFQ intensities from each protein across technical replicates and then multiplying by one hundred.
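The CV computation described above is simple per-protein arithmetic across the technical controls. A minimal sketch (Python; using ddof=1, i.e. the sample standard deviation as in R's sd(), which is an assumption since the text does not name the estimator):

```python
import numpy as np

def cv_percent(intensities: np.ndarray) -> np.ndarray:
    """Per-protein coefficient of variation across technical replicates,
    as described above: (standard deviation / mean) * 100.
    `intensities`: proteins x replicates matrix of LFQ intensities."""
    return intensities.std(axis=1, ddof=1) / intensities.mean(axis=1) * 100
```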
Correlation analyses
Mean log2-transformed transcript counts, protein intensities, and/or fold change values across fibers were calculated, filtered for shared proteins/genes, and Pearson correlation was calculated.
Omics-based fiber typing
Normalized counts and raw LFQ intensities were retrieved for well-described contractile proteins that have slow (MYH7, TNNT1, TPM3, ATP2A2 and MYL3) and fast (MYH2, MYH1, TNNT3, TPM1, ATP2A1 and MYL1) isoforms . For each isoform combination, the relative expression of each isoform was calculated and samples were ordered from high to low. The mathematical bottom knee of each curve was then determined using the barcodeRanks function in the DropletUtils package (v 1.18.1). This threshold was used to assign fiber types as pure (type 1, type 2A or type 2X) or hybrid (hybrid 1/2A, hybrid 2A/2X or hybrid 1/2X) (Supplementary Table ). For features with only two isoforms, fibers were assigned as 'slow', 'fast' or 'hybrid'. To determine the overlap of the contractile features assigning a fiber as slow, upset plots were generated using the upset function of the ComplexUpset package (v 1.3.3), and then simplified to bar plots.
Principal component analysis (PCA)
PCA was performed using the RunPCA function of the Seurat package. Scree plots were generated with the fviz_eig function of the factoextra package (v 1.0.7) after PCA with prcomp .
Seurat clustering
Uniform Manifold Approximation and Projection (UMAP) clustering was performed based on the K-nearest neighbor graph with the first 6 dimensions as input for both the transcriptome and proteome datasets (Supplementary fig. ). Feature plots were generated using the FeaturePlot function. UMAP plots were colored based on different criteria (MYH-based fiber types, participant, test day) stored in the metadata.
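The isoform-based fiber typing above reduces to two steps per isoform set: a relative-expression value per fiber, and a knee-derived threshold on the ranked values. An illustrative Python sketch; the knee finder here is a simple kneedle-style stand-in (farthest point from the chord between endpoints), not the spline-based bottom knee of DropletUtils::barcodeRanks:

```python
import numpy as np

def slow_fraction(slow: np.ndarray, fast: np.ndarray) -> np.ndarray:
    """Relative expression of a slow isoform per fiber,
    e.g. MYH7 / (MYH7 + fast MYH isoforms)."""
    return slow / (slow + fast)

def knee_threshold(values: np.ndarray) -> float:
    """Toy stand-in for the bottom-knee detection described above:
    order fibers high -> low and take the point farthest from the
    chord joining the first and last points."""
    y = np.sort(values)[::-1]
    x = np.arange(len(y), dtype=float)
    x0, y0, x1, y1 = x[0], y[0], x[-1], y[-1]
    # perpendicular-distance numerator from each point to the chord
    d = np.abs((y1 - y0) * x - (x1 - x0) * y + x1 * y0 - y1 * x0)
    return float(y[d.argmax()])
```

Fibers above the threshold for one isoform set would be called pure, and fibers above the threshold for two sets would be called hybrid.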
Enrichment analysis
Gene and protein sets were processed to obtain lists of features that were differentially expressed or, in the case of the top PCA drivers, in the top 5% of drivers in the positive and negative directions of the first and second principal components. Over-representation analysis was then performed on these features with the enrichGO and simplify functions of the clusterProfiler package (v 4.6.2) using all gene ontology terms. The obtained lists of significant terms were manually curated to extract interesting and relevant terms.
Hierarchical clustering of ribosomal proteins
Raw proteomics data were log2 transformed and filtered to contain proteins listed in the 'cytosolic ribosome' GO term, followed by Z-scoring prior to heatmap visualization using the pheatmap function from the Pheatmap package (v 1.0.12). The number of clusters was determined by visual inspection and assigning a value of 3 to the cutree function.
Differential expression analysis
To avoid artificially inflated p -values, which would arise from regarding every fiber as an independent replicate, we employed a pseudobulk differential expression analysis approach. We mathematically downsampled the total data points to one value per MYH-based fiber type per participant by aggregating (transcriptomics) or taking the median value (proteomics). Transcriptomics data were further processed using the DESeq2 pipeline (v 1.38.3) with a '~ participant + fiber type' statistical model. 1000 fiber proteomics data were processed using the limma workflow, fitting the data to a linear model defined as '~ 0 + fiber type + participant', whereas the myopathy dataset was fitted to '~ 0 + condition' for the comparisons between conditions and '~ 0 + fiber type and condition' for the comparisons including fiber type.
Fitted models were then subjected to gene ranking with the empirical Bayes method implemented in eBayes prior to extracting the results through topTable , with p -value adjustment set to Benjamini-Hochberg; both functions are from the limma package. The threshold for significantly different genes/proteins was defined as an adjusted p -value smaller than 0.05, and a log fold change cut-off of 1 was applied. For the nemaline myopathy dataset, the Xiao significance score was applied, which combines expression fold change and statistical significance . Proteins with a Xiao score under 0.05 were regarded as differentially expressed between conditions.
SCENIC
Inference of active transcription factors in slow and fast fibers was performed using Single-Cell rEgulatory Network Inference and Clustering (SCENIC, pySCENIC version 0.12.1 with cisTarget v10 databases and annotations) . To prioritize fiber type-specific transcription factors, both their fiber type-specific expression at the mRNA level and their regulon activity were combined into a final prioritization score. This prioritization score was calculated as the sum of the z-score-scaled differential expression score (logFC from pseudobulked data) and the z-score-scaled regulon specificity scores (RSS).
Non-coding RNA
For the transcriptomics data, the biotype of each gene was determined using the 'GENEBIOTYPE' column with the AnnotationDbi package (v 1.60.2) and the EnsDb.Hsapiens.v86 database. Genomic locations were interrogated using the UCSC Human Genome Browser ( https://genome.ucsc.edu ). Tissue-specific gene expression of interesting long non-coding RNAs was explored using the GTEx Portal database.
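The pseudobulk collapse described under 'Differential expression analysis' can be sketched with pandas (illustrative only; in the actual pipeline the collapsed values then go into DESeq2 or limma in R):

```python
import pandas as pd

def pseudobulk(df: pd.DataFrame, how: str = "median") -> pd.DataFrame:
    """Collapse fiber-level measurements to one value per participant and
    MYH-based fiber type, as in the pseudobulk approach described above:
    counts aggregated for transcriptomics (how="sum"), median intensities
    for proteomics (how="median").
    `df`: one row per fiber, with 'participant' and 'fiber_type' columns
    plus one numeric column per gene/protein."""
    grouped = df.groupby(["participant", "fiber_type"])
    return grouped.sum() if how == "sum" else grouped.median()
```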
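The SCENIC transcription factor prioritization score described above is the sum of two z-scored vectors; a minimal Python sketch of that combination:

```python
import numpy as np

def prioritization_score(logfc: np.ndarray, rss: np.ndarray) -> np.ndarray:
    """Final prioritization score described above: sum of the z-score-scaled
    differential expression score (logFC from pseudobulked data) and the
    z-score-scaled regulon specificity scores (RSS), one value per TF."""
    z = lambda v: (v - v.mean()) / v.std()
    return z(logfc) + z(rss)
```

A transcription factor ranks highly only when it is both differentially expressed and has a fiber type-specific regulon.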
Construction of putative lncRNA-encoded protein database
RNA sequences of the non-coding transcripts were extracted using the getSequence function from the biomaRt package (v 2.56.1), with 'transcript_exon_intron' as the seqType . Both intergenic (lincRNA) and antisense long non-coding RNA (lncRNA) transcripts were utilized for database construction. A six-frame translation was used to translate the corresponding RNA sequences into proteins, and the NCBI ORFfinder tool ( https://www.ncbi.nlm.nih.gov/orffinder/ ) was used to extract open reading frames (ORFs) from the transcripts. The minimal ORF length was set to 75 nucleotides, the genetic code was set to "Standard" and "Any sense codon" was allowed as a start codon to extract the maximum number of open reading frames. The obtained protein FASTA file contained multiple entries for each gene name, distinguished by various combinations of: i) transcript identifiers, ii) ORF identifiers and iii) start:stop codons.
Identification of lncRNA-encoded proteins
DIA raw MS data were analyzed with Spectronaut v18 using an in-house generated sample-specific FASTA, comprised of the reviewed human proteome (proteome ID: UP000005640, 20 426 proteins, downloaded Sep 2023) and the lncRNA-encoded protein sequences (125 223 proteins), in directDIA mode. The default settings were used unless otherwise noted. Data filtering was set to "Qvalue". The false discovery rate (FDR) was set to 1% at the peptide precursor level and 1% at the protein level. The Top3 peptide precursors were used for protein quantification. Downstream data analysis was performed using in-house developed R scripts.
PCA projection
The 1000 fiber and myopathy data sets were initially filtered to remove non-overlapping proteins. Then, they were combined and normalized using the normalizeBetweenArrays function from the limma package. The normalization method used was "quantile" to ensure that both data sets had the same distributions and were comparable.
The merged data set was then divided back into the two separate data sets, the 1000 fiber data set and the myopathy data set. For the 1000 fiber data set, PCA was calculated using the prcomp function. The myopathy data set was then multiplied by the PC loadings obtained from the 1000 fiber data set to generate its PCA projection. Finally, the PCA projections of the myopathy samples were plotted on top of the 1000 fiber PCA visualization.
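The quantile normalization and PCA projection steps described above can be sketched in Python with NumPy; this is a minimal stand-in for normalizeBetweenArrays and prcomp, using synthetic matrices (rows are fibers, columns are proteins for the PCA functions; proteins x samples for quantile normalization):

```python
import numpy as np

def quantile_normalize(mat):
    # Give every column (sample) of a proteins x samples matrix the same
    # distribution, as in limma::normalizeBetweenArrays(method = "quantile"):
    # each value is replaced by the mean of the sorted columns at its rank.
    ranks = np.argsort(np.argsort(mat, axis=0), axis=0)
    mean_sorted = np.sort(mat, axis=0).mean(axis=1)
    return mean_sorted[ranks]

def pca_fit(X):
    # Centre and decompose, as prcomp does; returns the column means
    # and the loadings (proteins x components).
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt.T

def pca_project(X_new, mean, loadings):
    # Project new samples (e.g. myopathy fibers) into the PCA space
    # of the reference data (e.g. the 1000 fiber data set).
    return (X_new - mean) @ loadings

rng = np.random.default_rng(0)
reference = rng.normal(size=(6, 4))   # 6 fibers x 4 proteins (synthetic)
new_data = rng.normal(size=(3, 4))    # 3 new fibers (synthetic)
mean, loadings = pca_fit(reference)
projection = pca_project(new_data, mean, loadings)
```

Projecting with fixed reference loadings, rather than refitting PCA on the combined data, is what allows the new samples to be overlaid on the reference ordination without altering it.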
The skeletal muscle-specific ribosomal gene signature, consisting of log2 fold change values comparing the mRNA expression of ribosomal subunits in skeletal muscle against 52 other human tissues, was downloaded from Panda et al . Log2 fold change values were ranked to identify the ribosomal proteins with the highest overexpression in human skeletal muscle.
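The ranking step itself is a one-liner; a sketch with invented log2 fold change values (the real signature comes from Panda et al., and the numbers below are not from it):

```python
# Toy gene -> log2FC mapping (muscle vs. other tissues) for a few
# ribosomal proteins; values are invented for illustration only.
signature = {"RPL3L": 4.2, "RPS4X": -0.3, "RPL7": 0.1, "RPS27": -1.0}

# Rank from highest to lowest overexpression in skeletal muscle.
ranked = sorted(signature, key=signature.get, reverse=True)
```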
The human 80S ribosome structural model (Protein Data Bank: 4V6X) was downloaded from the Protein Data Bank website (RCSB PDB). Visualization and editing of the ribosomal structure, and preparation of figures and movies, were performed in UCSF ChimeraX . Dot blots were conducted following a previously described protocol with a few modifications . Initially, two identical PVDF membranes were activated using 96% ethanol and washed with transfer buffer. The membranes were then placed on filter paper wetted with transfer buffer until they dried. Next, 1 µL of fiber lysate was spotted at the same position on both membranes, and the membranes were allowed to dry. The membranes were reactivated with 96% ethanol, gently washed with TBST, and then blocked in TBST containing 5% skim milk for 15 minutes. After blocking, the membranes were washed three times with TBST and incubated for one hour with a primary antibody solution of either anti-MYH7 (A4.840) or anti-MYH2 (A4.74), both from the Developmental Studies Hybridoma Bank (DSHB), at a dilution of 1:200 in TBST containing 1% skim milk. The membranes were then gently washed three times with TBST and incubated for two hours with the secondary antibody (anti-mouse) at a dilution of 1:20,000 in TBST containing 1% skim milk. Finally, the membranes were washed three times for five minutes each with TBST and visualized using Immobilon Forte (Millipore) in a ChemiDoc XRS+ (Bio-Rad) imaging system. For RNAscope labeling and subsequent immunohistochemistry (IHC), 8 µm sections from three different fixed-frozen human muscle biopsies were used. For detection of RP11-255P5.3 and LINC01405, the commercially available RNAscope Multiplex Fluorescent Assay V2 (Advanced Cell Diagnostics) with probes against Lnc-ERCC5-5-C1 (# 1276851-C1) and LINC01405-C2 (# 549201-C2) (Advanced Cell Diagnostics) was used according to the manufacturer's protocols.
To control for tissue quality, positive 3-plex (# 320881) and negative 3-plex (# 320871) control probes were used. To visualize the different muscle fiber subtypes, sections were blocked after RNAscope with 5% donkey serum and incubated overnight with antibodies against MYH2 (A4.74-s, 1:5; DSHB) and MYH7 (A4.840, 1:5; DSHB). After washing, sections were incubated with Alexa Fluor 488-conjugated Donkey Anti-Mouse IgG, Fcγ Subclass 1, and DyLight™ 405-conjugated Donkey Anti-Mouse IgM secondary antibodies, respectively, and mounted with ProLong™ Diamond Antifade Mountant (Invitrogen). Slides were imaged using a Zeiss Axio Observer microscope equipped with an Axiocam 702 camera. Biopsies from three individuals were used for quantification, with 187 muscle fibers counted in total. As each RNAscope dot corresponds to one RNA molecule, the number of dots/mm² was used as a measure of RNA expression. We first determined the number of dots/mm² within each fiber for both probes, then averaged the results by fiber type and participant. These averages were then used as input for a two-sample t-test. Immunolabelling was performed on 10 μm cryosections, fixed in 4% PFA (10 min), permeabilized in 0.1% Triton X-100 (20 min) and blocked in 10% Normal Goat Serum (50062Z, Life Technologies) with 0.1% BSA (1 h). Sections were incubated overnight (4 °C) with primary antibodies against MYH7 (mouse monoclonal A4.951, Santa Cruz, sc-53090, diluted 1:25) or MYH2 (mouse monoclonal SC71, DSHB, 1:25), each combined with a primary antibody against TNNT1 (rabbit polyclonal HPA058448, Sigma, diluted 1:500) in 5% goat serum with 0.1% BSA and 0.1% Triton X-100. Alexa Fluor Goat anti-Mouse 647 (A21237) was used as the secondary antibody for the MYHs and Alexa Fluor Donkey anti-Rabbit 488 (A11034) for TNNT1 (Life Technologies, 1:500 each in 10% Normal Goat Serum).
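The RNAscope quantification scheme above (dots/mm² per fiber, averaged by fiber type and participant, then compared between fiber types with a two-sample t-test) can be sketched as follows; the statistic is implemented directly so the sketch stays dependency-free, and the example numbers are invented, not the measured data:

```python
import math

def two_sample_t(a, b):
    # Student's two-sample t statistic (equal variances), applied here
    # to per-participant fiber-type averages of dots/mm^2.
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Invented per-participant averages (dots/mm^2) for three participants.
slow_fibers = [120.0, 95.0, 140.0]
fast_fibers = [30.0, 45.0, 25.0]
t_stat = two_sample_t(slow_fibers, fast_fibers)
```

Averaging to one value per participant and fiber type before testing keeps the fibers from one biopsy from being treated as independent observations.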
Fluorescent images were obtained with a 10x objective on a Zeiss Axio Observer 3 fluorescence microscope with a Colibri 5 LED detector, combined with a Zeiss Axiocam 705 mono camera, using Zen software (Zeiss). For visualization purposes, a selection of fibers was mounted on copper grids glued to a microscopy slide and imaged under a stereomicroscope. Further information on research design is available in the Reporting Summary linked to this article.

Supplementary information

Description Of Additional Supplementary File
Supplementary Datasets 1-30
Reporting Summary
Transparent Peer Review file
Source Data
Microbial colonisation rewires the composition and content of poplar root exudates, root and shoot metabolomes

The plant-associated microbiota is considered the second genome of the host plant. It comprises a diverse and complex range of microorganisms, including bacteria, archaea, fungi, oomycetes and viruses . Members of these microbial communities can be detrimental, such as pathogens, or, on the contrary, favourable to their host . Thus, they play essential roles in plant life traits: they promote nutrient acquisition, plant growth and resistance to biotic and abiotic stresses . The activities of the microbiota can be seen as the extended phenotype of the plant . Plants provide a multitude of habitats for the development and proliferation of microbial communities. Regardless of whether the plant is an annual or a perennial species, microbial community composition varies significantly between the bulk soil, rhizosphere, root endosphere and phyllosphere, indicating that the plant compartment is a major selective force in the assembly of the microbiota . These microbial communities are dynamic in time and space, and their assembly is regulated by both biotic (e.g. host genotype, microbe-microbe interactions) and abiotic factors (e.g. soil origin, climate, seasonal variation). Soil provides the main reservoir for root microbial communities, while both vertical (via seeds) and horizontal transmission (via soil, air, insects and/or other plants) are sources of phyllosphere microbial colonisation . Nevertheless, the relative roles of the soil and air pathways in phyllosphere colonisation are not yet clearly established, and results are contradictory. Some evidence suggests that phyllosphere microorganisms are sourced from the soil , while other studies identify the air as the main reservoir , or a dual influence .
The rhizosphere is the first compartment in which the host genotype starts to influence microbiota composition, through rhizodeposits . This selection of microbial communities between soil and rhizosphere has been largely documented and results in a decrease in diversity . In the host endosphere, plant-microbe and microbe-microbe interactions are the main factors driving the assembly of the microbiota , whose diversity decreases from belowground to aboveground compartments . Although trees are long-lived perennials whose microbiota evolves over the course of their lives and with the stage of forest cover , the initial assembly of the microbiota is thought to influence plant health and physiology . Previous studies have demonstrated that the microbiota of belowground and aboveground tissues of poplars ( Populus sp.) changes drastically over the first months of growth in natural soil , and that both selective and stochastic factors operate in the structuring of the poplar root microbiota . On a finer scale, we have previously shown that naive poplar roots are colonised within a few days and that several waves of fungi and bacteria follow one another over the first 50 days, with saprotrophs slowly being replaced by endophytes and symbionts . While the establishment of a molecular dialogue between root cells and mycorrhizal and endophytic fungi likely explains the delayed colonisation by these fungi, other mechanisms presumably also drive this colonisation. For example, root exudates are expected to play a key role in the chemoattraction of rhizospheric microbes and in structuring their habitat . Conversely, rhizospheric microbes can systemically modulate the composition of root exudates .
The composition of root exudates also depends on the plant species, genotype, developmental stage and environmental conditions, but all root exudates contain broadly the same classes of compounds derived from primary and secondary metabolism: sugars, organic acids, amino acids, lipids, proteins, terpenes, phenolics and flavonoids . Most studies report root exudate composition using sterile hydroponic systems, followed by characterisation of the role of one type of metabolite in the interaction with the microbiota. Furthermore, most studies have been conducted on herbaceous plants or shrubs, whereas similar studies with trees are limited to a few species and involve very few forest trees . To our knowledge, only one study has attempted to characterise poplar root exudates, and it focused on rhizospheric soil metabolomes rather than on actual exudates . Even less is known regarding the feedback effects of the rhizospheric microbiota on tree exudation. The biology and microbiota of trees are very different from those of herbaceous plants, so we cannot predict the behaviour of tree-microbiota interactions on the basis of what is known from herbaceous plants. While root exudates contribute to the initial steps of selection of the rhizospheric and root microbiota, plant metabolites also participate in the structuring of the host endophytic microbiota . For instance, it has been suggested that variations in the microbiome between poplar species are linked to specific differences in defence compounds, such as the biosynthesis of phenolic glycosides (salicylates) and other metabolites . Conversely, changes in the microbial composition of roots can significantly affect the metabolome of poplar roots and shoots .
In light of these knowledge gaps, we aimed in this study to characterise the Populus tremula x tremuloides T89 root exudates, the early dynamics of the assembly of the microbiota along the root-to-shoot axis, and its interaction with shoot and root metabolite contents. This study investigated the dynamics of microbial colonisation of the roots and shoots of naive poplar cuttings from the soil reservoir, combined with the dynamics of root exudate composition and the metabolomics of roots and shoots. Naive (i.e. entirely sterile at the time of planting) Populus tremula x tremuloides T89 cuttings were cultivated in either natural or sterilised (gamma-irradiated) soil for 30 days in small, closed mesocosms. We measured root and shoot biomass; root exudate composition; soil, rhizosphere, root and shoot microbiota over time; and root and shoot metabolomes at the final time point. We hypothesised that: (1) the presence of soil microbiota modifies the metabolite contents (both composition and quantity) of root exudates; (2) root exudation is a dynamic process in time and correlates with the assembly of specific microbial communities in the rhizosphere; (3) microbial colonisation of roots and shoots induces metabolomic changes in both roots and shoots; (4) the establishment of aboveground communities follows the same dynamics as belowground; and (5) specific microbial communities are selected from the soil reservoir to colonise the shoots.
Root exudate composition is dynamic over time and microorganisms strongly reduce the abundance of root exudate metabolites

In order to investigate how microbial communities influence poplar root exudation, the metabolite profiles of root exudates were characterised over time in the presence or absence of microorganisms (Figure ). First, the possible impacts of gamma irradiation on soil fertility and growth of poplar cuttings were examined 30 days post-planting. Gamma irradiation of the soil did not have a significant impact on soil carbon or nitrogen levels, or on pH. Nor did it affect the availability of Ca, Fe, Mg, K, and Na, although it did reduce the levels of phosphorus in the soil 0.3-fold (Table S1). Shoots and roots grew similarly with or without microbes (Figure S2). Between 15 and 72 metabolites were detected in the root exudates of young poplar cuttings (Fig. A). Eighty percent could be attributed to known metabolites belonging to six main classes of compounds: glycosides (23%), organic acids (13%), defence compounds (10%), lipid-related metabolites (7%), sugars (7%), and amino acids (2%) (Fig. B, Figure S3). Striking differences in the metabolite composition of root exudates were observed between poplars grown in natural or sterilised soil. The number of metabolites detected in root exudates was 3 to 5 times higher in sterilised soil than in natural soil (Fig. A). Root exudates captured from poplars grown in sterilised soil were enriched in all major classes of metabolites, including defence compounds (e.g. phenylethyl-tremuloidin, salicyl alcohol, salicylic acid), organic acids (e.g. hexanoic acid, citric acid, ferulic acid), and sugars (e.g. glucose, sucrose, galactose), in comparison with root exudates from cuttings grown in natural soil (Figs. B and , Figure S3, Figure S4). Interestingly, most of the metabolites belonging to the glycoside, amino acid (e.g. 5-oxo-proline, GABA) and lipid-related (e.g.
monopalmitin, monostearin, palmitic acid) classes were only detected in root exudates from poplars grown in sterilised soil (Fig. B, Figure S3). The lipid-related monopalmitin and monostearin were by far the most abundant compounds found in the root exudates of poplars grown under sterile conditions, being 10 and 15 times, respectively, more abundant than the most abundant sugars and organic acids (Table S2). In addition, root exudation profiles were dynamic over the 30 days of poplar growth in both soil types, but followed opposite trends. While the number of metabolites in root exudates produced in natural soil decreased significantly, this number increased significantly in sterilised soil over the 30 days (Fig. A). In sterilised soil, the production of most root exudates, belonging to diverse metabolite classes, increased significantly over time (e.g. defence: tremuloidin, phenylethyl-tremuloidin; organic acids: citric acid, erythronic acid; lipid-related: monopalmitin, monostearin; sugars: glucose, butyl-mannoside) (Fig. B). Conversely, root exudates from poplars grown in natural soil displayed an increased concentration of glycerol, whereas the concentration of an unidentified glycoside (14.21 min; m/z 279) and two unidentified compounds (12.06 min; m/z 404 517 307 319; 13.21 min; m/z 235 204 217) decreased significantly over time (Figure S3). To conclude, root exudates of young poplar cuttings are dynamic over time, from 4 to 30 days after planting, and root exudates of poplars cultivated in the presence of microorganisms contain fewer metabolites than those of poplars grown in their absence.

Microbial communities from the rhizosphere but not soil evolved over time

The massive alteration of root exudates in the presence of microorganisms suggests that the rhizospheric microbiota consumes a large fraction of the exudates, reducing their concentrations below the level of detection, and/or that the microbiota exerts feedback effects on plant metabolism.
In order to better understand the microorganisms involved in these processes, the fungal and bacterial communities of the rhizosphere and their dynamics were characterised. Given that the soil was the main reservoir of microorganisms colonising poplar habitats in our experimental design, the microbial communities present in the soil before transplanting the axenic poplars were characterised first. A total of 286 ± 6 fungal operational taxonomic units (OTUs) and 941 ± 2 bacterial OTUs were detected in soil (Table S3.A). Fungal soil communities were dominated by endophytes, ectomycorrhizal fungi (EMF) and, to a lesser extent, saprotrophs (28 ± 1%, 26 ± 4% and 14 ± 1%, respectively) (Figure S5, Table S3.B). The endophyte Mortierella and the EMF Inocybe and Tuber were the dominant fungal genera detected in soil over time (Table S3.C). Regarding arbuscular mycorrhizal fungal (AMF) communities, which were tracked independently with 28S barcode sequencing, Rhizophagus , Glomus and an unidentified OTU of Glomeromycetes were the most abundant genera in soil (Table S3.D). Finally, Candidatus Udaeobacter (Verrucomicrobia) and two unidentified OTUs of the Acidobacteria phylum dominated soil bacterial communities over the 30 days of growth (Table S3.E). Overall, soil bacterial and fungal communities, including Glomerales, remained stable over time. By contrast, the diversity and composition of fungal and bacterial communities fluctuated over time in the rhizosphere, with the exception of AMF. After 4 days of growth, 255 ± 12 fungal OTUs were identified, increasing to 269 ± 7 by the end of the experiment. Similarly, the 950 ± 22 bacterial OTUs detected at the early time point increased to 997 ± 5 after 30 days of growth (Table S3.A).
The rhizospheric bacterial community was dominated by Proteobacteria ( Pseudomonas , Burkholderia , Oxalobacteraceae), Verrucomicrobia (Candidatus Udaeobacter ), Bacteroidetes ( Mucilaginibacter ) and Acidobacteria (Candidatus Solibacter ), while EMF (e.g. Inocybe , Lactarius , Tomentella , Tuber ), endophytes (e.g. Mortierella , Hyaloscypha , Ilyonectria ) and, to a lesser extent, saprotrophs ( Umbelopsis , Bifiguratus ) dominated the fungal community (Figure S6, Figure S7, Table S3.C-E). Many of these microorganisms were enriched in the rhizosphere compared to soil, illustrating the well-known selective effect of this habitat. It is also noteworthy that Candidatus Udaeobacter , Candidatus Solibacter and Acidothermus were similarly abundant in the rhizosphere and the soil, representing more than 15% of the reads in these two habitats. Furthermore, different dynamics of colonisation were observed in the rhizosphere among the dominant bacterial and fungal genera (> 3%, p.adj ≤ 0.05). The relative abundance of most of the bacterial genera strongly enriched in the rhizosphere compared to soil, such as Burkholderia , Pseudomonas and Mucilaginibacter , decreased significantly over time, while members of Candidatus Udaeobacter , Candidatus Solibacter and Acidothermus remained stable from T4 to T30. Regarding fungi, despite no significant difference in the abundance of the main fungal trophic guilds over time (p.adj > 0.05), the relative abundances of the saprotroph Bifiguratus and the EMF Inocybe increased significantly, while that of the EMF Lactarius decreased over time. The fungal endophyte Ilyonectria was significantly more abundant only at T15, not at T30 (Figure S6, Figure S7, Table S3.C). Lastly, as observed in the soil compartment, Glomus , Claroideoglomus and Rhizophagus dominated the rhizosphere compartment and remained stable over the 30 days of growth.
Overall, while microbial communities in the soil remained stable over the experiment, the assembly of the rhizosphere communities was dynamic over time, with some dominant fungal and bacterial genera found only transiently.

Microbial colonisation from belowground to aboveground compartments

Among the dominant fungal and bacterial genera found only transiently in the rhizosphere, some microorganisms, such as Pseudomonas or Ilyonectria , are known potential root and leaf endophytes . We thus asked whether this transient detection in the rhizosphere reflected a movement towards their final habitat (root and/or shoot), or whether they were outcompeted by other microorganisms. To answer this question, the dynamics of the fungal and bacterial communities from the rhizosphere to the roots and the shoots were followed. Microbial colonisation was rapid and highly dynamic in both belowground and aboveground compartments (Fig. , Table S3). Fungal and bacterial taxa were detected in root systems after only 1 day of growth, and bacterial communities were already present in shoots, although their relative abundance was variable among samples and they were therefore not considered in the analyses. After only 4 days, both bacterial and fungal communities were established in roots and shoots. Fungal endophytes dominated both root and shoot fungal communities at the early time points and decreased over time. While EMF dominated the late stage of root colonisation, saprotrophs and pathogens were the most abundant fungal guilds detected in shoots after 30 days. In contrast, the dominant Glomerales, including Glomus and Rhizophagus , remained stable in roots over time (Figure S6, Figure S7, Table S3.D). As indicated by the analyses of microbial structure, early root and shoot fungal communities were closely related before differentiating over time (Fig. , Table S4).
The fungal endophyte Mortierella and the saprotroph Umbelopsis drove both root and shoot early fungal communities before vanishing from those compartments at later stages of colonisation (Figs. and ). These fungal genera were later replaced by taxa specific to each plant compartment. A core fungal microbiota was detected, with some taxa assembling in both compartments, while other microorganisms were specific to a particular niche (Figure S6, Figure S7). For example, the endophyte Ilyonectria colonised both roots and shoots at similar relative abundances, whereas Trichocladium , Colletotrichum and Clonostachys dominated aboveground compartments, and the fungal endophyte Hyaloscypha and the EMF Mallocybe , Inocybe and Tomentella prevailed in belowground compartments (Figs. and , Figure S8). Notably, the detection of these EMF in shoots was not an isolated event, as they remained detectable there at low levels until 30 days of growth. Although this effect was less striking for bacteria than for fungal communities, the transfer of bacterial genera from belowground to aboveground compartments was also detected (Figs. and , Figure S9). Mucilaginibacter , Pseudomonas and Burkholderia-Caballeronia-Paraburkholderia were present in root systems at an early stage of colonisation before prevailing in aerial compartments, while Asticcacaulis , an unidentified OTU of the Comamonadaceae family, and Acidothermus dominated belowground compartments (Figs. and , Figure S9). To conclude, the microbial colonisation of the root and shoot habitats evolved over time, through the transition of a core and a specific microbiota from belowground to aboveground compartments. EMF dominated root systems, while saprotrophs and pathogens dominated the shoots.
Microbial communities alter belowground and aboveground poplar metabolite composition

Having demonstrated that root exudates were strongly impacted by microbial presence and that both roots and shoots were massively colonised by complex, dynamic and specific microbial communities, we investigated how microbial colonisation influenced the root and shoot metabolomes. The metabolomic profiles of belowground and aboveground compartments were characterised after 30 days of growth in the presence or absence of microorganisms and were correlated with microbial communities. As with the root exudate responses, metabolite richness and diversity after 30 days were higher in poplars grown in sterilised soil than in natural soil, particularly in roots. In poplars grown in natural soil, 64 and 90 metabolites were detected in belowground and aboveground compartments, respectively, compared with 81 and 96 metabolites, respectively, in sterilised soil (Fig. , Figure S4). Microbial colonisation induced greater variation in metabolite concentrations in the roots than in the shoots (Fig. , Figure S4). Most striking was the reduction in the levels of most amino acids, as well as glucose and fructose, in roots in the presence of microorganisms. By contrast, levels of glycerol, sucrose and trehalose increased in the roots of poplars grown in natural soil. Microbial colonisation also led to an increase of sterol levels in roots and of the unsaturated fatty acids α-linolenic acid and linoleic acid in both roots and shoots (Fig. ). Interestingly, the majority of defence metabolites and their conjugates were detected in the aerial compartments (e.g. trichocarpin, tremulacin, salicyltremuloidin) and varied differently depending on the plant organ (Fig. ). Tremuloidin decreased significantly in root systems from sterile soil, but increased in shoots of poplars grown in the same soil (Fig. ).
Surprisingly, salicylic acid and salicyltremuloidin were more readily detected in both roots and shoots of poplars grown in the absence of microbes (Fig. ). Overall, our data show that as early as 30 days post-planting, the root and shoot metabolomes of naive poplar cuttings are strongly modified by root microbial communities.

Correlations between poplar metabolites and microbial taxa abundances

After showing that microorganisms alter the metabolite profiles of poplar in both belowground and aboveground habitats, we investigated whether the presence of particular microbial communities was associated with specific metabolites in root exudates, roots and shoots, using multiple regression models through redundancy analyses (RDA). Significant correlations between root exudates and microbial communities were observed over time (Fig. ). Although only one novel metabolite, tentatively identified as 2-O-benzoyl-p-toluic acid glucoside, was positively associated with the fungi Mortierella and Inocybe , four compounds (glycerol, L-tartaric acid, glyceric acid, and the unidentified 13.56 min; m/z 273 363) were correlated with bacterial genera (Fig. A). The relative abundances of the early-associated Pseudomonas (Gamma Proteobacteria), Pedobacter (Bacteroidetes), Burkholderia and Cupriavidus (Beta Proteobacteria), Rhizobium (Alpha Proteobacteria) and Mucilaginibacter (Bacteroidetes) were positively correlated with the levels of two organic acids, L-tartaric acid and glyceric acid. In contrast, late bacterial taxa were more (e.g. Ktedonobacteraceae JG30a and Bryobacter ) or less (e.g. Bdellovibrio ) strongly associated with glycerol, which was enriched at the end of the experiment (Fig. B). Within plant tissues, RDA analyses revealed associations between 6 fungal taxa and 16 metabolites (Fig. A), and between 14 bacterial taxa and 27 metabolites (Fig. B). Associations between shoot metabolites and microbes tended to be more numerous than those between root metabolites and microbes.
Four metabolites (the sugar acid xylonic acid, the sugar alcohol threitol, the defence compound salicylic acid, and the antioxidant alpha-tocopherol) were positively correlated with fungal and bacterial taxa in shoots (Fig. A,B). The EMF Inocybe , the endophyte Hyaloscypha and the saprotroph Luellia , which all mainly colonised roots, were positively associated with 4 metabolites detected only in the roots at T30, including the lipid-related metabolite tetracosanoic acid, the two glycosides purpurein and grandidentatin, and 2-hydroxyglutaconic acid (Fig. A). These fungi were also significantly positively correlated with sucrose. Conversely, the shoot-associated fungi Clonostachys , Bifiguratus and Trichocladium were positively associated with several metabolites enriched in the shoots, including the defence metabolite salicylic acid, the sugar alcohol/acid threitol and xylonic acid, organic acids (erythronic acid, maleic acid), the lignin precursor caffeic acid, and several other compounds of unknown function (e.g. o-cresol glucoside…) (Fig. A). Regarding bacteria, only one uncharacterised metabolite (9.60 min; m/z 228 110 291) was found associated with bacterial taxa in roots (Fig. B). In contrast, 10 bacterial genera enriched in shoot tissues were associated to different degrees with shoot metabolites. The strongest associations were found for OTUs of the Oxalobacteraceae and Micrococcaceae families, and for the genera Mucilaginibacter (Bacteroidetes) and Catenulispora (Actinomycetes), with several defence compounds (salirepin, salicylic acid, tremulacin), organic acids (malic acid, aconitic acid, galactonic acid) and several glucosides (Fig. B). These compounds were also, though less strongly, associated with bacteria belonging to Dyella , Pseudomonas (Gamma Proteobacteria), Pedobacter (Bacteroidetes), Burkholderia (Beta Proteobacteria) and Rhizobium (Alpha Proteobacteria) (Fig. B).
Overall, these RDA analyses revealed metabolite and microbial biomarkers in root and shoot tissues, with specific metabolites highly enriched in either tissue.
In order to investigate how microbial communities influence poplar root exudation, the metabolite profiles of root exudates were characterised over time in the presence or absence of microorganisms (Figure ). First, the possible impacts of gamma irradiation on soil fertility and on the growth of poplar cuttings were examined 30 days post-planting. Gamma irradiation had no significant impact on soil carbon or nitrogen levels, on pH, or on the availability of Ca, Fe, Mg, K and Na, but it did reduce soil phosphorus levels by a factor of 0.3 (Table S1). Shoots and roots grew similarly with or without microbes (Figure S2). Between 15 and 72 metabolites were detected in the root exudates of young poplar cuttings (Fig. A). Eighty percent could be attributed to known metabolites belonging to six main classes of compounds: glycosides (23%), organic acids (13%), defence compounds (10%), lipid-related metabolites (7%), sugars (7%) and amino acids (2%) (Fig. B, Figure S3). Striking differences in the metabolite composition of root exudates were observed between poplars grown in natural and sterilised soil. The number of metabolites detected in root exudates was 3 to 5 times higher in sterilised soil than in natural soil (Fig. A). Root exudates captured from poplars grown in sterilised soil were enriched in all major classes of metabolites, including defence compounds (e.g. phenylethyl-tremuloidin, salicyl alcohol, salicylic acid), organic acids (e.g. hexanoic acid, citric acid, ferulic acid) and sugars (e.g. glucose, sucrose, galactose), in comparison with root exudates from cuttings grown in natural soil (Figs. B and , Figure S3, Figure S4). Interestingly, most of the metabolites belonging to the glycoside, amino acid (e.g. 5-oxo-proline, GABA) and lipid-related (e.g. monopalmitin, monostearin, palmitic acid) classes were only detected in root exudates from poplars grown in sterilised soil (Fig. B, Figure S3).
The lipid-related metabolites monopalmitin and monostearin were by far the most abundant compounds found in the root exudates of poplars grown under sterile conditions, being 10 and 15 times more abundant, respectively, than the most abundant sugars and organic acids (Table S2). In addition, root exudation profiles were dynamic over the 30 days of poplar growth in both soil types, but followed opposite trends. While the number of metabolites in root exudates produced in natural soil decreased significantly, this number increased significantly in sterilised soil over the 30 days (Fig. A). In sterilised soil, the production of most root exudates, belonging to diverse metabolite classes, increased significantly over time (e.g. defence compounds: tremuloidin, phenylethyl-tremuloidin; organic acids: citric acid, erythronic acid; lipid-related metabolites: monopalmitin, monostearin; sugars: glucose, butyl-mannoside) (Fig. B). Conversely, root exudates from poplars grown in natural soil displayed an increased concentration of glycerol, whereas the concentration of an unidentified glycoside (14.21 min; m/z 279) and two unidentified compounds (12.06 min; m/z 404 517 307 319; 13.21 min; m/z 235 204 217) decreased significantly over time (Figure S3). To conclude, the root exudates of young poplar cuttings were dynamic over time, from 4 to 30 days after planting, and the root exudates of poplars cultivated in the presence of microorganisms contained fewer metabolites than those of poplars grown in their absence.
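The richness comparison described in this section (counts of detected metabolites per condition and time point) can be sketched as follows. This is an illustrative toy example with made-up detection lists, not the study's GC-MS data:

```python
# Toy sketch: compare metabolite richness of root exudates between natural
# and sterilised soil at two sampling times. All detection lists below are
# hypothetical examples, not measured values.

def richness(detected):
    """Number of distinct metabolites detected in a sample."""
    return len(set(detected))

exudates = {
    ("natural", 4):  ["glucose", "citric acid", "glycerol", "salicin"],
    ("natural", 30): ["glycerol", "salicin"],
    ("sterilised", 4):  ["glucose", "citric acid", "glycerol", "salicin",
                         "sucrose", "monopalmitin", "GABA", "tremuloidin"],
    ("sterilised", 30): ["glucose", "citric acid", "glycerol", "salicin",
                         "sucrose", "monopalmitin", "GABA", "tremuloidin",
                         "monostearin", "ferulic acid", "5-oxo-proline",
                         "palmitic acid"],
}

for day in (4, 30):
    nat = richness(exudates[("natural", day)])
    ste = richness(exudates[("sterilised", day)])
    print(f"day {day}: natural={nat}, sterilised={ste}, ratio={ste / nat:.1f}")
```

In the toy data, richness falls over time in natural soil and rises in sterilised soil, mirroring the opposite trends reported above.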
The massive alteration of root exudates in the presence of microorganisms suggests that the rhizospheric microbiota consumes a large fraction of the exudates, reducing their concentrations below the level of detection, and/or that the microbiota exerts feedback effects on plant metabolism. To gain a better understanding of the microorganisms involved in these processes, the fungal and bacterial communities of the rhizosphere and their dynamics were characterised. Given that the soil was the main reservoir of microorganisms colonising poplar habitats in our experimental design, the microbial communities present in the soil before transplanting axenic poplars were characterised first. A total of 286 ± 6 fungal operational taxonomic units (OTUs) and 941 ± 2 bacterial OTUs were detected in soil (Table S3.A). Fungal soil communities were dominated by endophytes, ectomycorrhizal fungi (EMF) and, to a lesser extent, saprotrophs (28 ± 1%, 26 ± 4% and 14 ± 1%, respectively) (Figure S5, Table S3.B). The endophyte Mortierella and the EMF Inocybe and Tuber were the most dominant fungal genera detected in soil over time (Table S3.C). Regarding arbuscular mycorrhizal fungal (AMF) communities, which were tracked independently with 28S barcode sequencing, Rhizophagus, Glomus and an unidentified OTU of Glomeromycetes were the most abundant genera in soil (Table S3.D). Finally, Candidatus Udaeobacter (Verrucomicrobia) and two unidentified OTUs of the Acidobacteria phylum dominated the soil bacterial communities over the 30 days of growth (Table S3.E). Overall, soil bacterial and fungal communities, including Glomerales, remained stable over time. By contrast, the diversity and composition of fungal and bacterial communities fluctuated over time in the rhizosphere, with the exception of AMF. After 4 days of growth, 255 ± 12 fungal OTUs were identified, which increased to 269 ± 7 by the end of the experiment.
Similarly, the 950 ± 22 bacterial OTUs detected at the early time point increased to 997 ± 5 after 30 days of growth (Table S3.A). The rhizospheric bacterial community was dominated by Proteobacteria (Pseudomonas, Burkholderia, Oxalobacteraceae), Verrucomicrobia (Candidatus Udaeobacter), Bacteroidetes (Mucilaginibacter) and Acidobacteria (Candidatus Solibacter), while EMF (e.g. Inocybe, Lactarius, Tomentella, Tuber), endophytes (e.g. Mortierella, Hyaloscypha, Ilyonectria) and, to a lesser extent, saprophytes (Umbelopsis, Bifiguratus) dominated the fungal community (Figure S6, Figure S7, Table S3.C-E). Many of these microorganisms were enriched in the rhizosphere compared to soil, illustrating the well-known selective effect of this habitat. It is also noteworthy that Candidatus Udaeobacter, Candidatus Solibacter and Acidothermus were similarly abundant in the rhizosphere and the soil, representing more than 15% of the reads in these two habitats. Furthermore, different colonisation dynamics were observed in the rhizosphere among the dominant bacterial and fungal genera (> 3%, p.adj ≤ 0.05). The relative abundance of most of the bacterial genera that were strongly enriched in the rhizosphere compared to soil, such as Burkholderia, Pseudomonas and Mucilaginibacter, decreased significantly over time, while Candidatus Udaeobacter, Candidatus Solibacter and Acidothermus remained stable from T4 to T30. Regarding fungi, despite no significant difference in the abundance of the main fungal trophic guilds over time (p.adj > 0.05), the relative abundances of the saprotroph Bifiguratus and the EMF Inocybe increased significantly, while that of the EMF Lactarius decreased over time. The fungal endophyte Ilyonectria was significantly more abundant only at T15, but not at T30 (Figure S6, Figure S7, Table S3.C).
Lastly, as observed in the soil compartment, Glomus, Claroideoglomus and Rhizophagus dominated the rhizosphere compartment and remained stable over the 30 days of growth. Overall, while the microbial communities in the soil remained stable over the experiment, the assembly of the rhizosphere communities was dynamic over time, with some dominant fungal and bacterial genera found only transiently.
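The temporal dynamics reported for rhizosphere genera rest on relative abundances computed from read counts. A minimal sketch (with invented counts, not the study's sequencing data) of this computation is:

```python
# Toy sketch: convert OTU read counts to relative abundances per time point
# and flag rhizosphere genera whose relative abundance declines between T4
# and T30, as reported for Burkholderia, Pseudomonas and Mucilaginibacter.
# All counts below are hypothetical.

counts = {  # genus -> {time: read count}
    "Burkholderia":           {"T4": 400, "T30": 100},
    "Pseudomonas":            {"T4": 300, "T30": 60},
    "Mucilaginibacter":       {"T4": 200, "T30": 40},
    "Candidatus Udaeobacter": {"T4": 500, "T30": 900},
    "Acidothermus":           {"T4": 100, "T30": 180},
}

def relative_abundance(counts, time):
    """Fraction of total reads per genus at a given time point."""
    total = sum(c[time] for c in counts.values())
    return {genus: c[time] / total for genus, c in counts.items()}

ra_t4 = relative_abundance(counts, "T4")
ra_t30 = relative_abundance(counts, "T30")
declining = sorted(g for g in counts if ra_t30[g] < ra_t4[g])
print(declining)  # genera depleted over time in this toy example
```

In a real analysis, the significance of such changes would additionally be assessed with count-based statistical models and multiple-testing correction (as implied by the p.adj values quoted above).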
Among the dominant fungal and bacterial genera that were only transiently found in the rhizosphere, several microorganisms, such as Pseudomonas or Ilyonectria, are known to be potential root and leaf endophytes. We thus asked whether this transient detection in the rhizosphere reflected a transitory passage towards their final habitat (root and/or shoot), or whether they were outcompeted by other microorganisms. To answer this question, the dynamics of the fungal and bacterial communities from the rhizosphere to the roots and the shoots were followed. Microbial colonisation was rapid and highly dynamic in both belowground and aboveground compartments (Fig. , Table S3). Fungal and bacterial taxa were detected in root systems as early as 1 day after planting, and bacterial communities were already present in shoots, although their relative abundances were too variable among samples to be considered in the analyses. After only 4 days, both bacterial and fungal communities were established in roots and shoots. Fungal endophytes dominated both root and shoot fungal communities at the early time points and decreased over time. While EMF dominated the late stage of root colonisation, saprotrophs and pathogens were the most abundant fungal guilds detected in shoots after 30 days. In contrast, the dominant Glomerales, including Glomus and Rhizophagus, remained stable in roots over time (Figure S6, Figure S7, Table S3.D). As indicated by the analyses of microbial community structure, early root and shoot fungal communities were closely related before differentiating over time (Fig. , Table S4). The fungal endophyte Mortierella and the saprotroph Umbelopsis drove both root and shoot early fungal communities before vanishing from these compartments at later stages of colonisation (Figs. and ). These fungal genera were later replaced by specific taxa depending on the plant compartment.
A core fungal microbiota, with some taxa assembling in both compartments, was detected alongside microorganisms specific to a particular niche (Figure S6, Figure S7). For example, the endophyte Ilyonectria colonised both roots and shoots at similar relative abundances, whereas Trichocladium, Colletotrichum and Clonostachys dominated aboveground compartments, and the fungal endophyte Hyaloscypha and the EMF Mallocybe, Inocybe and Tomentella prevailed in belowground compartments (Figs. and , Figure S8). It is noteworthy that these EMF were also detected in shoots; this was not an isolated event, as they remained detectable at low levels up to 30 days of growth. Even though this effect was less striking for bacterial than for fungal communities, the transfer of bacterial genera from belowground to aboveground compartments was also detected (Figs. and , Figure S9). Mucilaginibacter, Pseudomonas and Burkholderia-Caballeronia-Paraburkholderia were present in root systems at an early stage of colonisation before prevailing in aerial compartments, while Asticcacaulis, an unidentified OTU of the Comamonadaceae family, and Acidothermus dominated belowground compartments (Figs. and , Figure S9). To conclude, the microbial colonisation of the root and shoot habitats evolved over time, through the transition of both a core and a specific microbiota from belowground to aboveground compartments. EMF dominated root systems, while saprotrophs and pathogens dominated the shoots.
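The partition into a shared core microbiota and compartment-specific taxa is, computationally, a set operation on per-compartment taxon lists. A toy sketch (hypothetical presence sets built from the genera named above, not the study's full tables) is:

```python
# Toy sketch: split taxa into a shared "core" microbiota and
# compartment-specific taxa, mirroring the root/shoot comparison described
# in the text. The presence sets are illustrative, not exhaustive.

root_taxa = {"Ilyonectria", "Hyaloscypha", "Inocybe", "Tomentella", "Mallocybe"}
shoot_taxa = {"Ilyonectria", "Trichocladium", "Colletotrichum", "Clonostachys"}

core = root_taxa & shoot_taxa            # taxa colonising both habitats
root_specific = root_taxa - shoot_taxa   # belowground-only taxa
shoot_specific = shoot_taxa - root_taxa  # aboveground-only taxa

print(sorted(core))
print(sorted(root_specific))
print(sorted(shoot_specific))
```

In practice a taxon is usually called "core" only above a prevalence/abundance threshold across replicates, not from a single presence call.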
Having demonstrated that root exudates were strongly impacted by microbial presence and that both roots and shoots were massively colonised by complex, dynamic and specific microbial communities, we investigated how microbial colonisation influenced the root and shoot metabolomes. The metabolomic profiles of belowground and aboveground compartments were characterised after 30 days of growth in the presence or absence of microorganisms and were correlated with microbial communities. Similar to the root exudate responses, metabolite richness and diversity after 30 days were higher in poplars grown in sterilised soil than in natural soil, particularly in roots. In poplars grown in natural soil, 64 and 90 metabolites were detected in belowground and aboveground compartments, respectively, compared with 81 and 96 metabolites in sterilised soil (Fig. , Figure S4). Microbial colonisation induced greater variation in metabolite concentrations in the roots than in the shoots (Fig. , Figure S4). Most striking was the reduction of the levels of most amino acids, as well as glucose and fructose, in roots in the presence of microorganisms. By contrast, levels of glycerol, sucrose and trehalose increased in the roots of poplars grown in natural soil. Microbial colonisation also led to an increase of sterol levels in roots and of the unsaturated fatty acids α-linolenic acid and linoleic acid in both roots and shoots (Fig. ). Interestingly, the majority of defence metabolites and their conjugates were detected in the aerial compartments (e.g. trichocarpin, tremulacin, salicyltremuloidin), and their levels varied depending on the plant organ (Fig. ). Tremuloidin decreased significantly in root systems from sterile soil, but increased in the shoots of poplars grown in the same soil (Fig. ). Surprisingly, salicylic acid and salicyltremuloidin were more readily detected in both roots and shoots of poplars grown in the absence of microbes (Fig. ).
Overall, our data show that as early as 30 days post-planting, root and shoot metabolomes of naive poplar cuttings are strongly modified by root microbial communities.
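Comparisons of this kind rest on per-metabolite fold changes between growth conditions. A minimal sketch (with made-up concentrations chosen to reproduce the direction of the reported changes, not the study's values) is:

```python
import math

# Toy sketch: log2 fold changes of root metabolite levels between poplars
# grown in natural (with microbes) vs sterilised (without microbes) soil.
# Concentrations are arbitrary illustrative units, not measured data.

root_levels = {  # metabolite -> (natural soil, sterilised soil)
    "glucose":   (1.0, 4.0),  # depleted in the presence of microbes
    "sucrose":   (6.0, 1.5),  # enriched in the presence of microbes
    "trehalose": (2.0, 0.5),  # enriched in the presence of microbes
    "alanine":   (0.5, 2.0),  # amino acids accumulate without microbes
}

# Positive log2FC = higher with microbes; negative = higher without.
log2fc = {m: math.log2(nat / ste) for m, (nat, ste) in root_levels.items()}
up_with_microbes = sorted(m for m, fc in log2fc.items() if fc > 1)
down_with_microbes = sorted(m for m, fc in log2fc.items() if fc < -1)
print(up_with_microbes, down_with_microbes)
```

A real analysis would compute such fold changes per replicate and test them statistically before calling a metabolite differentially accumulated.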
Correlations between poplar metabolites and microbial taxa abundances
After showing that microorganisms alter the metabolite profiles of poplar in both belowground and aboveground habitats, we investigated whether the presence of particular microbial communities was associated with specific metabolites in root exudates, roots and shoots, using multiple regression models through redundancy analyses (RDA). Significant correlations between root exudates and microbial communities were observed over time (Fig. ). While only one novel metabolite, tentatively identified as 2-o-Benzoyl-p-toluic acid glucoside, was positively associated with the fungi Mortierella and Inocybe, four compounds (glycerol, L-tartaric acid, glyceric acid, and the unidentified 13.56 min; m/z 273 363) were correlated with bacterial genera (Fig. A). The relative abundances of the early-associated Pseudomonas (Gamma Proteobacteria), Pedobacter (Bacteroidetes), Burkholderia and Cupriavidus (Beta Proteobacteria), Rhizobium (Alpha Proteobacteria) and Mucilaginibacter (Bacteroidetes) were positively correlated with the levels of two organic acids, L-tartaric acid and glyceric acid. In contrast, late bacterial taxa were positively (e.g. Ktedonobacteraceae JG30a and Bryobacter) or negatively (e.g. Bdellovibrio) associated with glycerol, which was enriched at the end of the experiment (Fig. B). Within plant tissues, RDA analyses revealed associations between 6 fungal taxa and 16 metabolites (Fig. A), and between 14 bacterial taxa and 27 metabolites (Fig. B). Associations between shoot metabolites and microbes tended to be more numerous than those between root metabolites and microbes. Four metabolites (the sugar acid xylonic acid, the sugar alcohol threitol, the defence compound salicylic acid and the antioxidant alpha-tocopherol) were positively correlated with fungal and bacterial taxa in shoots (Fig. A,B).
The EMF Inocybe, the endophyte Hyaloscypha and the saprophyte Luellia, which all mainly colonised roots, were positively associated with four metabolites that were only detected in the roots at T30, including the lipid-related metabolite tetracosanoic acid, the two glycosides purpurein and grandidentatin, and 2-hydroxyglutaconic acid (Fig. A). Additionally, these fungi were significantly positively correlated with sucrose. Conversely, the shoot-associated fungi Clonostachys, Bifiguratus and Trichocladium were positively associated with several metabolites that were enriched in the shoots, including the defence metabolite salicylic acid, sugar alcohols/acids (threitol, xylonic acid), organic acids (erythronic acid, maleic acid), the lignin precursor caffeic acid and several other compounds of unknown function (e.g. o-cresol glucoside…) (Fig. A). Regarding bacteria, only one uncharacterised metabolite (9.60 min; m/z 228 110 291) was found to be associated with bacterial taxa in roots (Fig. B). In contrast, 10 bacterial genera that were enriched in shoot tissues were associated to different degrees with shoot metabolites. The strongest associations were found between OTUs of the Oxalobacteraceae and Micrococcaceae families, as well as the genera Mucilaginibacter (Bacteroidetes) and Catenulispora (Actinomycetes), and several defence compounds (salirepin, salicylic acid, tremulacin), organic acids (malic acid, aconitic acid, galactonic acid) and several glucosides (Fig. B). These compounds were also, although less strongly, associated with bacteria belonging to Dyella, Pseudomonas (Gamma Proteobacteria), Pedobacter (Bacteroidetes), Burkholderia (Beta Proteobacteria) and Rhizobium (Alpha Proteobacteria) (Fig. B). Overall, these RDA analyses revealed metabolite and microbial biomarkers in root and shoot tissues, with specific metabolites highly enriched in either tissue.
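RDA regresses the metabolite matrix on the microbial abundance matrix and then ordinates the fitted values; the fraction of metabolite variance captured by the regression quantifies the overall microbe-metabolite association. The authors' exact pipeline is not specified in this excerpt; the following is a minimal NumPy sketch of the core computation on synthetic data (in practice one would use a dedicated implementation such as vegan's rda() in R, together with permutation tests for significance):

```python
import numpy as np

# Minimal redundancy analysis (RDA) sketch on synthetic data, illustrating
# the kind of analysis described in the text; not the authors' pipeline.
rng = np.random.default_rng(0)
n_samples, n_taxa, n_metabolites = 20, 3, 5

X = rng.normal(size=(n_samples, n_taxa))          # taxa abundances
B_true = rng.normal(size=(n_taxa, n_metabolites))
Y = X @ B_true + 0.1 * rng.normal(size=(n_samples, n_metabolites))

# Centre both matrices, then fit the multivariate linear model Y ~ X.
Xc = X - X.mean(axis=0)
Yc = Y - Y.mean(axis=0)
B, *_ = np.linalg.lstsq(Xc, Yc, rcond=None)
Y_fit = Xc @ B                                    # constrained part of Y

# Ordinate the fitted values: singular vectors define the RDA axes.
U, s, Vt = np.linalg.svd(Y_fit, full_matrices=False)
explained = (s**2) / (Yc**2).sum()                # variance per RDA axis
constrained_fraction = (Y_fit**2).sum() / (Yc**2).sum()
print(f"metabolite variance explained by taxa: {constrained_fraction:.2f}")
```

Because the synthetic metabolite matrix is generated mostly from the taxa matrix, the constrained fraction is close to 1 here; with real, noisy community data it would be much lower and its significance would be assessed by permutation.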
It is now well demonstrated that nearly all plant tissues harbour microbial communities and that plants, including poplars, offer distinct habitats that select contrasting microbiota, particularly between roots and shoots. Extensive research has been carried out to determine the factors controlling the structuring of rhizosphere and root microbial communities in a wide range of plants, from herbaceous plants to trees. In Populus sp., as in many plant species, soil is the major driver determining the rhizosphere microbiota. Rhizodeposition and host-dependent factors, such as immunity and, in poplar, salicylate-related compounds, are thought to refine the selection of the microbiota in the rhizosphere. However, most studies are based on comparing microbiota between tissues at a given time, and less is known about the process of colonisation and selection of microbiota between tissues, particularly for perennials. Here, the very early dynamics of this process were investigated to capture the different stages of the colonisation of belowground and aboveground tissues by fungi and bacteria. We combined these data with analyses of the composition of root exudates of 1-month-old Populus tremula x tremuloides T89 cuttings over 30 days and of root and shoot metabolomes at 30 days. Our evidence indicates that (1) the presence of the microbiota massively modifies the composition of root exudates and of root and shoot metabolomes, (2) root exudation is dynamic over time, as is the microbiota of the rhizosphere, roots and shoots, (3) soil is a reservoir of microorganisms for the colonisation of shoots and (4) roots and shoots are first colonised by the same microorganisms, which are later replaced by habitat-specific taxa.
Composition of Populus tremula x tremuloides root exudates from young cuttings
Limited information exists regarding the composition of poplar root exudates. A study by Li et al.
endeavoured to analyse the root exudates of four poplar species, establishing correlations between poplar root exudate metabolites and the predominant bacterial taxa in the rhizosphere. However, this study primarily focused on collecting rhizospheric soil metabolomes rather than authentic root exudates, complicating direct comparisons with our research. In our study, we investigated the root exudates of Populus tremula x tremuloides T89, revealing a rich composition encompassing sugars, organic acids, glucosides and various phenolic compounds, including flavonoids, as well as lipid-related metabolites. This composition aligns with existing research on non-perennial species, confirming poplar root exudates as a carbon-rich environment that likely serves as a nutrient source for soil microbes and thereby participates in the soil C-cycle. As well as being a source of nutrients, root exudates also contain secondary metabolites, notably phenolic compounds, capable of regulating the growth of microorganisms. More specifically in poplars, salicylates (tremuloidin, salicin, populin, salicylic acid) and their derivatives were identified within the root exudates. Although salicylic acid (SA) has been detected in the root exudates of several non-perennial species, the concentrations and diversity of salicylates tend to be lower in the root exudates of other plants. These compounds exhibit multifaceted functions in root exudates, acting as microbial deterrents at low concentrations, attracting saprotrophs capable of breaking them down, or participating in phosphate solubilisation in soil. In Arabidopsis, the abundance of some bacterial communities is altered in response to SA signalling, which is explained in part by the use of SA as a C source for bacterial growth or as an immune signal. Clocchiatti et al.
demonstrated that the combination of SA and primary metabolites induces a shift in the balance between fungi and bacteria, favouring the growth of saprotrophic fungi. Finding the same compounds in poplar root exudates led us to propose that salicylate compounds not only play a role in the selection of the endosphere microbiota, as notably suggested by Veach et al., but also contribute to the selection of microbes from the soil reservoir.
Lipid-related metabolites involved in root-microbe signalling versus sustaining endophytic microbial growth
Interestingly, our study indicated a lipid-related signature for poplar root exudates and roots. Whereas mainly fatty acids were detected in poplar root exudates, sterols and specific fatty acids were also identified in poplar roots. Fatty acids in root exudates were specifically detected in poplars cultivated in the absence of microbes. This is consistent with a possible role of these fatty acids in plant root-microbe signalling. Fatty acids from Pinus sylvestris root exudates have previously been shown to stimulate the growth of the EMF Laccaria and Leccinum. Additionally, the observed consumption of fatty acids at 30 days is consistent with the detected presence of arbuscular mycorrhizal fungi (Rhizophagus, Glomus OTUs). Indeed, several studies demonstrated lipid transfer from the plant to the AM fungi in the form of monoacylglycerol containing C16 fatty acids. In contrast, root microbial colonisation induced higher concentrations of phytosterols (campesterol, stigmasterol, β-sitosterol) and fatty acids (stearic acid, α-linolenic acid, linoleic acid) in roots. These accumulations likely support the demand for plant membrane remodelling to sustain microbial colonisation, in particular for arbuscule formation by AMF, but also for other types of endophytic fungi.
Because we cannot distinguish the microbial or plant origin of the fatty acids, we can also hypothesise that their higher concentration reflects the intense growth of fungi inside the roots.
Reduction in root exudates: active consumption by microbes or negative feedback?
Massive differences between root exudates from poplar trees grown in the presence or absence of microbes were detected as early as 4 days after planting. Lipid-related metabolites, sugars, organic acids, amino acids, salicylates and their derivatives were greatly depleted in the presence of microbes as early as 4 days post-planting. At this time point, mainly bacteria and saprotrophic fungi were colonising the rhizosphere and the roots, which argues in favour of an important role of bacteria and saprotrophic fungi in root exudate consumption, including that of fatty acids and defence compounds. The degradation of these defence compounds may later facilitate the development of sensitive microorganisms that could not have developed in the presence of phenolics. At later time points (T15 and T30), the same trend of metabolite depletion in root exudates in the presence of microbes was found. The question remains whether soil microbes consumed poplar root exudates as nutrient sources or to degrade toxic defence compounds, allowing the entry of endophytic microbes. Another, non-exclusive, hypothesis to explain the reduction of exudate production triggered by microbial colonisation is negative self-regulation of plant metabolism. However, all studies performed so far demonstrated that the presence of microorganisms either promoted root exudation compared with axenic conditions or induced chemical changes in root exudates. Downregulation of root exudation by poplars following microbial colonisation would therefore be a new and hitherto undescribed behaviour.
However, this is reminiscent of the “cry for help” concept, supported by different studies showing that, in response to biotic stress, plants attract beneficial microorganisms by modifying root exudation. The absence of microorganisms is in fact an abnormal situation for the plant and could potentially be perceived as a stress. Further experiments will be needed to distinguish between these two non-exclusive hypotheses: microbial consumption and negative self-regulation. Yet, the presence of organic acids such as glyceric acid and tartaric acid was positively correlated with the relative abundance of early-stage bacterial taxa, including Pseudomonas, Burkholderia and Mucilaginibacter. This correlation may be due either to the production of these organic acids by the bacteria, or to their production by the trees to serve as chemoattractants for bacteria that would thus be selected from the rhizosphere reservoir. Interestingly, late-stage bacterial taxa (e.g. Ktedonobacteraceae JG30a, Bryobacter) were positively correlated with glycerol but negatively correlated with glyceric acid. Given that glyceric acid is primarily derived from the microbial oxidation of glycerol, this metabolite turnover may represent a selection process, acting as a repellent or an attractant depending on the bacterial taxon. As expected, poplar root exudates contained low-molecular-weight carboxylates such as malic acid, citric acid and aconitic acid. These compounds were enriched in the root exudates of poplars cultivated in sterilised soil. This is consistent with the mechanism by which plants increase P-uptake by secreting carboxylates that can displace immobilised P from inorganic and organic soil compounds. The levels of these carboxylates were lower in root exudates from poplars cultivated in the presence of microbes, whereas the amount of oxalic acid was higher in the roots of the same poplars. Oxalic acid production may also be of fungal origin.
It could serve as a signal molecule for the mycophagous bacterium Collimonas, which has been detected in the roots of poplar in isolated cases. Taken together, these data suggest that P-mobilisation by plant-produced organic acids mainly occurs in the absence of microbes associated with the root systems. It can be hypothesised that organic acids have a dual role: P-scavenging in the absence of microbes and a C source for microbes.
Modification of root and shoot metabolites as indicators for microbial community establishment
In contrast with poplar root exudates, metabolites accumulated in the roots and shoots of poplars cultivated in the presence of microbes rather than in their absence at 30 days post-planting. Whereas root microbial colonisation induced higher sucrose and P concentrations (but lower glucose and fructose levels) in roots, amino acids accumulated to high concentrations in the roots of plants cultivated in sterilised soil. At that time point, root systems were mainly colonised by EMF, known to transfer N to the plant and to receive C in return. These data support C exchange from sucrose breakdown to the microbes. Interestingly, sucrose was positively correlated with the presence of three fungi in the root, the EMF Inocybe, the endophyte Hyaloscypha and a microbe described as a saprotroph, suggesting that they may be greater consumers of the sucrose produced by the plant. On the other hand, the lower concentrations of amino acids in roots colonised by microbes were not expected. It can be hypothesised that amino acids are directly assimilated into proteins to sustain the increased metabolic processes in the presence of microbes, explaining the lack of amino acids in our samples. Root microbial colonisation also strongly remodelled the poplar leaf defence compounds, as previously shown in other studies.
The levels of defence compounds were correlated mainly with bacterial genera rather than fungi, suggesting that the modified niche (through metabolite changes) could be a trigger for the selection of bacteria colonising the leaves from the root system. Alternatively, bacteria may be the main trigger for the remodelling of leaf defence. Colonisation of the poplar tissues by soil-borne microorganisms was very rapid, with both fungi and bacteria being detected on the roots as early as 24 h after the poplars were planted, and in the shoots shortly thereafter. The structuring of the microbial communities followed a two-step process for both root and aerial tissues, in which an early-stage community, dominated by endophytes and saprophytes, rapidly colonised the tissues and was later replaced by a more stable community of symbionts. While the early-stage community was quite similar between roots and shoots, the late-stage communities were clearly differentiated between the roots and the shoots. However, dominant members of the shoot microbiota at the late time point were transiently detected at earlier time points in the rhizosphere and in the roots, suggesting that they were first chemoattracted to the rhizosphere and then transited through the roots to the shoots. It is noteworthy that the levels of tartaric acid and glyceric acid in the root exudates declined over time in parallel with the transiently detected bacterial taxa, suggesting that these acids may act as chemoattractants in the rhizosphere. Two main horizontal routes of colonisation can be envisaged for the phyllosphere: airborne microorganisms and those carried by insects land on leaves and form the epiphytic microbiota, including microorganisms that penetrate the leaf endosphere through stomata and wounds, whereas other microorganisms travel from the soil through the roots to the stems; the relative importance of the two routes is uncertain.
Our data suggest that the soil may be an important reservoir of microorganisms for the colonisation of the aerial tissues of P. tremula x tremuloides by both fungi and bacteria. This is in agreement with previous studies on grapes, A. thaliana and rice. The mesocosm device used in this study, which is sealed and only allows gas exchange, restricted the source of microorganisms that can colonise the phyllosphere to the soil, and it remains to be determined how significant the airborne route is relative to the soil reservoir. Although we cannot rule out sporulation by soil microorganisms, it is unlikely that such a phenomenon was involved here, given that the first communities colonising the shoots after 4 days were very similar to those colonising the roots, suggesting instead a transition via the roots. Nevertheless, the dominant taxa found in the shoots in our study included the fungus Ilyonectria and the bacterial genera Pseudomonas, Burkholderia and Mucilaginibacter. These microbes are typically found in the phyllosphere of various trees and plants, suggesting that our observations are not an artefact and that these microorganisms can colonise shoot tissues from the soil. However, it remains to be determined whether they migrate to the shoots via the surface (epiphytically) or within the tissues (endophytically). The colonisation of roots and shoots in two waves is reminiscent of what we recently described for the roots of P. tremula x alba 717-1B4. In both studies, an early, massive colonisation of the roots by the endophyte Mortierella was observed; however, in contrast with our previous experiment, the saprophytes Umbelopsis and Saitozoma, while abundant in the soil and in the rhizosphere, did not colonise the roots of P. tremula x tremuloides T89, suggesting potential genotype-specific responses. Nevertheless, the replacement of Mortierella by other endophytes and EMF in both poplar species, and in both roots and shoots over time, is noteworthy.
It may be hypothesised that fast-growing species such as Mortierella are quicker to colonise the host, but then compete with niche specialists such as EMF and endophytes, or are excluded by the host. However, Mortierella has been regularly isolated as a poplar endophyte and has even been shown to have plant growth-promoting properties , suggesting that it has the ability to establish in poplar tissues. Alternatively, the fungus may remain in the tissues but at a low level of abundance compared to other fungi and thus stay hidden until the death of the tissues where it is also often detected in the early stage of decay . Specific monitoring using quantitative PCR and metatranscriptomics would be necessary to elucidate the behaviour of this ubiquitous fungus. The peculiar case of AMF as a stable community over time Unlike other types of fungi, the composition of the AMF community remained stable over the course of the experiment once established in the roots. We previously demonstrated using Confocal Laser Scanning Microscopy that AMF establish symbiotic associations with P. tremula x alba 717-1B4 roots within 10 days, but we were unable to definitely identify the fungal species by metabarcoding . Bonito et al. also reported that classical ITS and 18S metabarcoding methods were not able to characterise the Glomeromycete community in poplar roots although these fungi are well known to colonise poplar roots . To circumvent this problem, the nested PCR method developed by Brígido et al. was used and captured in detail the composition of AMF communities in roots of P. tremula x tremuloides T89, for the first time using high-throughput sequencing. We demonstrate that several species belonging to the genera Rhizophagus and Glomus can colonise a single root system at the same time, unlike Acaulospora and Claroideoglomus that were only retrieved from soil. Such a pattern is in accordance with previous studies using regular Sanger sequencing identification methods . 
It is generally considered that AMF dominate in roots at the juvenile stage of life of poplars and they are then replaced by EMF , and that environmental factors can influence the balance between EMF and AM . Our data indicate that AM and EMF can together colonise naive root systems and coexist, even when the EMF strongly expand.
Populus tremula x tremuloides root exudates from young cuttings
Limited information exists regarding the composition of poplar root exudates. A study by Li et al. endeavoured to analyse the root exudates of four poplar species, establishing correlations between poplar root exudate metabolites and the predominant bacterial taxa in the rhizosphere. However, this study primarily focused on collecting rhizospheric soil metabolomes rather than authentic root exudates, complicating direct comparisons with our research. In our study, we investigated the root exudates of Populus tremula x tremuloides T89, revealing a rich composition encompassing sugars, organic acids, glucosides and various phenolic compounds, including flavonoids and lipid-related metabolites. This composition aligns with existing research on non-perennial species and confirms poplar root exudates as a carbon-rich environment, likely serving as a nutrient source for soil microbes and thereby participating in the soil C-cycle. As well as being a source of nutrients, root exudates also contain secondary metabolites, notably phenolic compounds, capable of regulating the growth of microorganisms. More specifically to poplars, salicylates (tremuloidin, salicin, populin, salicylic acid) and their derivatives were identified within the root exudates. Although salicylic acid (SA) has been detected in the root exudates of several non-perennial species, the concentrations and diversity of salicylates tend to be lower in the root exudates of other plants. These compounds exhibit multifaceted functions in root exudates, acting at low concentrations as deterrents for microorganisms, while attracting saprotrophs capable of breaking them down, or participating in phosphate solubilisation in soil. In Arabidopsis, the abundance of some bacterial communities is impacted in response to SA signalling, which is explained in part by the use of SA as a C source for bacterial growth or as an immune signal.
Clocchiatti et al. demonstrated that the combination of SA and primary metabolites induces a shift in the balance between fungi and bacteria, favouring the growth of saprotrophic fungi. Finding the same compounds in poplar root exudates leads us to propose that salicylate compounds not only play a role in the selection of the endosphere microbiota, as notably suggested by Veach et al., but also contribute to the selection of microbes from the soil reservoir.
Interestingly, our study indicated a lipid-related signature for poplar root exudates and roots. Whereas mainly fatty acids were detected in poplar root exudates, sterols and specific fatty acids were also identified in poplar roots. Fatty acids in root exudates were specifically detected from poplars cultivated in the absence of microbes. This is consistent with a possible role of these fatty acids in plant root-microbe signalling. Fatty acids from Pinus sylvestris root exudates have previously been shown to stimulate the growth of the EMF Laccaria and Leccinum. Additionally, the observed consumption of fatty acids at 30 days is consistent with the detected presence of arbuscular mycorrhizal fungi (Rhizophagus, Glomus OTUs). Indeed, several studies have demonstrated lipid transfer from the plant to AM fungi in the form of monoacylglycerols containing C16 fatty acids. In contrast, root microbial colonisation induced higher concentrations of phytosterols (campesterol, stigmasterol, β-sitosterol) and fatty acids (stearic acid, α-linoleic acid, linoleic acid) in roots. These accumulations likely support the demand for plant membrane remodelling to sustain microbial colonisation, in particular for arbuscule formation by AMF, but also for other types of endophytic fungi. Because we cannot distinguish the microbial or plant origin of these fatty acids, we can also hypothesise that their higher concentration reflects the intense growth of fungi inside the roots.
Massive differences between root exudates from poplar trees grown in the presence or absence of microbes were detected as early as 4 days after planting. Lipid-related metabolites, sugars, organic acids, amino acids, salicylates and their derivatives were greatly depleted in the presence of microbes as early as 4 days post-plantation. At this time point, mainly bacteria and saprotrophic fungi were colonising the rhizosphere and the roots, which argues in favour of an important role of bacteria and saprotrophic fungi in root exudate consumption, including fatty acids and defence compounds. The degradation of these defence compounds may later facilitate the development of sensitive microorganisms that could not have developed in the presence of phenolics. At later time points (T15 and T30), the same trend of metabolite depletion in root exudates in the presence of microbes was found. The question remains whether soil microbes consumed poplar root exudates as nutrient sources or to degrade toxic defence compounds, allowing the entry of endophytic microbes. Another non-exclusive hypothesis to explain the reduction of exudate production triggered by microbial colonisation is negative self-regulation of plant metabolism. However, all studies performed so far have demonstrated that the presence of microorganisms promoted either root exudation, as compared to axenic conditions, or chemical changes in root exudates. Downregulation of root exudation by poplars following microbial colonisation would therefore be a new and hitherto undescribed behaviour. However, it is reminiscent of the "cry for help" concept, supported by different studies showing that in response to biotic stress, plants attract beneficial microorganisms by modifying root exudation. The absence of microorganisms is in fact an abnormal situation for the plant and could potentially be perceived as a stress.
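The depletion described above can be expressed as a log2 fold change of a metabolite's mean abundance between the two soil conditions. The sketch below is a minimal illustration with invented peak areas, not measured values from this study:

```python
import math

def log2_fold_change(natural, sterile, pseudo=1e-6):
    """Log2 ratio of mean metabolite abundance: natural vs sterilised soil.

    Negative values indicate depletion in the presence of microbes.
    A small pseudo-count guards against zero intensities.
    """
    mean_nat = sum(natural) / len(natural)
    mean_ste = sum(sterile) / len(sterile)
    return math.log2((mean_nat + pseudo) / (mean_ste + pseudo))

# Hypothetical normalised peak areas for one exudate metabolite at T4,
# five replicates per condition.
natural_soil = [0.8, 1.1, 0.9, 1.0, 0.7]
sterile_soil = [4.2, 3.8, 4.5, 4.0, 3.9]

lfc = log2_fold_change(natural_soil, sterile_soil)
print(round(lfc, 2))  # → -2.18, i.e. depleted in the presence of microbes
```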
Further experiments will be needed to disentangle these two non-exclusive hypotheses: microbial consumption and negative self-regulation. Yet, the levels of organic acids such as glyceric acid and tartaric acid were positively correlated with the relative abundance of early-stage bacterial taxa, including Pseudomonas, Burkholderia and Mucilaginibacter. This correlation may be due either to the production of these organic acids by the bacteria, or to their production by the trees to serve as chemoattractants for bacteria that would thus be selected from the rhizosphere reservoir. Interestingly, late-stage bacterial taxa (e.g. Ktedonobacteraceae JG30a, Bryobacter) were positively correlated with glycerol but negatively correlated with glyceric acid. Given that glyceric acid is primarily derived from the microbial oxidation of glycerol, this metabolite turnover may represent a selection process, acting as a repellent or an attractant depending on the bacterial taxon. As expected, poplar root exudates contained low-molecular-weight carboxylates, such as malic acid, citric acid and aconitic acid. These compounds were enriched in root exudates of poplars cultivated in sterilised soils. This is consistent with the mechanism by which plants increase P-uptake by secreting carboxylates that can displace immobilised P from inorganic and organic soil compounds. The levels of these carboxylates were lower in root exudates from poplars cultivated in the presence of microbes, whereas the amount of oxalic acid was higher in roots of the same poplars. Oxalic acid production may also be of fungal origin. It could serve as a signal molecule for the mycophagous bacterium Collimonas, which has been detected in the roots of poplar in isolated cases. Taken together, these data suggest that P-mobilisation by plant-produced organic acids mainly occurs in the absence of microbes associated with the root systems.
It can be hypothesised that organic acids play a dual role: P-scavenging in the absence of microbes and serving as a C source for microbes.
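Correlations like those reported here between organic acid levels and taxon abundances are typically rank-based. A self-contained Spearman correlation is sketched below with invented values for a declining exudate metabolite and an early-stage taxon (the numbers are illustrative only):

```python
def rank(values):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rho = Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: tartaric acid level in exudates and relative abundance
# of an early-stage taxon over the four time points (T1, T4, T15, T30).
tartaric_acid = [5.2, 3.1, 1.4, 0.6]
taxon_abund = [0.30, 0.22, 0.08, 0.02]

print(round(spearman(tartaric_acid, taxon_abund), 3))  # → 1.0: both decline together
```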
In contrast with poplar root exudates, metabolites accumulated in roots and shoots of poplars cultivated in the presence of microbes rather than in their absence at 30 days post-plantation. Whereas root microbial colonisation induced higher sucrose and P concentrations (but less glucose and fructose) in roots, amino acids accumulated to high concentrations in roots of plants cultivated in sterilised soils. At that time point, root systems were mainly colonised by EMF, known to transfer N to the plant and to receive C in return. These data support C exchange from sucrose breakdown to the microbes. Interestingly, sucrose was positively correlated with the presence of three fungi in the root: the EMF Inocybe, the endophyte Hyaloscypha and a microbe described as a saprotroph, suggesting that they may be greater consumers of the sucrose produced by the plant. On the other hand, the lower concentrations of amino acids in roots colonised by microbes were not expected. It can be hypothesised that amino acids are directly assimilated into proteins to sustain the increased metabolic processes in the presence of microbes, explaining the lack of amino acids in our samples. Root microbial colonisation also strongly remodelled the poplar leaf defence compounds, as previously shown in other studies. The levels of defence compounds were mainly correlated with bacterial genera rather than fungal ones, suggesting that the modified niche (through metabolite changes) could be a trigger for the selection of bacteria colonising the leaves from the root system. Alternatively, bacteria may be the main trigger for the remodelling of leaf defence.

Colonisation of the poplar tissues by soil-borne microorganisms was very rapid, with both fungi and bacteria being detected on the roots as early as 24 h after the poplars were planted, and in the shoots shortly thereafter.
The structuring of the microbial communities followed a two-step process in both root and aerial tissues, in which an early-stage community dominated by endophytes and saprophytes rapidly colonised the tissues and was later replaced by a more stable community of symbionts. While the early-stage community was quite similar between roots and shoots, the late-stage communities were clearly differentiated between the two. However, dominant members of the shoot microbiota at the late time point were transiently detected at earlier time points in the rhizosphere and in the roots, suggesting that they were first attracted to the rhizosphere and then transited through the roots to the shoots. It is noteworthy that the levels of tartaric acid and glyceric acid in the root exudates declined over time in parallel with these transiently detected bacterial taxa, suggesting that these acids may act as chemoattractants in the rhizosphere. Two main horizontal routes of colonisation can be envisaged for the phyllosphere: airborne microorganisms and those from insect carriers that land on leaves and form the epiphytic microbiota, including microorganisms that penetrate the leaf endosphere through stomata and wounds, versus microorganisms that travel from the soil through the roots to the stems; the relative importance of the two routes is uncertain. Our data suggest that the soil may be an important reservoir of microorganisms for the colonisation of aerial tissues of P. tremula x tremuloides by both fungi and bacteria. This is in agreement with previous studies on grapes, A. thaliana and rice. The mesocosm device used in this study, which is sealed and only allows gas exchange, restricted the source of microorganisms that can colonise the phyllosphere to the soil, and it remains to be determined how much the airborne route contributes relative to the soil reservoir.
Although we cannot rule out sporulation by soil microorganisms, it is unlikely that such a phenomenon was involved in this case, given that the first communities colonising the shoots after 4 days were very similar to those colonising the roots, suggesting instead a transition via the roots. Moreover, the dominant taxa found in the shoots in our study included the fungus Ilyonectria and the bacterial genera Pseudomonas, Burkholderia and Mucilaginibacter. These microbes are typically found in the phyllosphere of various trees and plants, suggesting that our observations are not an artefact and that these microorganisms can colonise shoot tissues from the soil. However, it remains to be determined whether they migrate to the shoots via the surface (epiphytic) or within the tissues (endophytic). The colonisation of roots and shoots in two waves is reminiscent of what we recently described for roots of P. tremula x alba 717-1B4. In both studies, an early, massive colonisation of the roots by the endophyte Mortierella was observed, but in contrast with our previous experiment, the saprophytes Umbelopsis and Saitozoma, while abundant in the soil and in the rhizosphere, did not colonise the roots of P. tremula x tremuloides T89, suggesting potential genotype-specific responses. Nevertheless, the replacement of Mortierella by other endophytes and EMF in both poplar species, and in both roots and shoots over time, is noteworthy. It may be hypothesised that fast-growing species such as Mortierella are quicker to colonise the host, but are then outcompeted by niche specialists such as EMF and endophytes, or are excluded by the host. However, Mortierella has been regularly isolated as a poplar endophyte and has even been shown to have plant growth-promoting properties, suggesting that it has the ability to establish in poplar tissues.
Alternatively, the fungus may remain in the tissues at a low level of abundance compared to other fungi, staying hidden until the death of the tissues, where it is often detected in the early stages of decay. Specific monitoring using quantitative PCR and metatranscriptomics would be necessary to elucidate the behaviour of this ubiquitous fungus.
The peculiar case of AMF as a stable community over time
Unlike other types of fungi, the composition of the AMF community remained stable over the course of the experiment once established in the roots. We previously demonstrated using confocal laser scanning microscopy that AMF establish symbiotic associations with P. tremula x alba 717-1B4 roots within 10 days, but we were unable to definitively identify the fungal species by metabarcoding. Bonito et al. also reported that classical ITS and 18S metabarcoding methods were not able to characterise the Glomeromycete community in poplar roots, although these fungi are well known to colonise poplar roots. To circumvent this problem, the nested PCR method developed by Brígido et al. was used and captured in detail the composition of AMF communities in roots of P. tremula x tremuloides T89, for the first time using high-throughput sequencing. We demonstrate that several species belonging to the genera Rhizophagus and Glomus can colonise a single root system at the same time, unlike Acaulospora and Claroideoglomus, which were only retrieved from soil. Such a pattern is in accordance with previous studies using regular Sanger sequencing identification methods. It is generally considered that AMF dominate in roots at the juvenile stage of poplar life and are then replaced by EMF, and that environmental factors can influence the balance between EMF and AMF. Our data indicate that AMF and EMF can colonise naive root systems together and coexist, even when the EMF strongly expand.
In this work, we showed that microbial colonisation triggered rapid and massive changes in the quality and quantity of poplar root exudates and led to a strong alteration of the root and shoot metabolomes. Furthermore, we demonstrated that the assembly of microbial communities in both belowground and aboveground habitats is highly dynamic, involving successional waves of colonisation. Our investigation reveals a close relationship between the fungal communities establishing in the roots and shoots during the early stages of colonisation, with subsequent differentiation in the later stages. These findings support the transition of microorganisms from belowground to aboveground compartments, followed by fine-tuned selection by the host, resulting in the assembly of specific communities in each plant habitat, although we also observed a core microbiota colonising both niches. Poplars are unique among temperate forest trees, firstly because of their particular metabolism of salicylates, and secondly because of the double colonisation of their roots by AMF and EMF and the high abundance of endophytes in their roots. It would therefore be very interesting in the future to determine whether our results apply only to the Salicaceae family or are more generic to trees.
Biological material
In order to decipher the dynamics of establishment of fungal and bacterial communities between the aboveground and belowground compartments of the poplar microbiota, poplar Populus tremula x tremuloides T89 was cultivated in vitro under sterile conditions on Murashige and Skoog (MS) medium (2.2 g MS salts including vitamins, Duchefa; 0.8% Phytagel and 2% sucrose). Poplar cuttings were cultivated at 24 °C in a growth chamber (photoperiod, 16 h day; light intensity, 150 μmol.m−2.s−1) on MS supplemented with indole-3-butyric acid (IBA) (2 mg.L−1) for 1 week before being transferred onto MS for 2 weeks until root development. This growth protocol was used for all experiments.
Soil collection and sterilisation by gamma irradiation
To obtain a forest-like microbial inoculum, the topsoil horizon (0 to 20 cm) of a Populus trichocarpa x deltoides plantation located in Champenoux, France (48° 519,460 N, 2° 179,150 E), was collected over an area of 1 m2 under 5 different trees. Soil was dried at room temperature and sieved at 2-mm-diameter pore size before further use. Three subsets of 20 g were stored at −80 °C until soil physico-chemical property analyses. In order to decipher the influence of microbial communities on poplar root exudates and metabolomes, a subset of 50 kg of soil was sterilised by gamma irradiation (45–65 kGy, Ionisos, France). The soil was packaged in individual plastic bags containing 200 g of soil prior to gamma irradiation. The sterilised (gamma-irradiated) soil was stored for 3 months at room temperature in the dark before use, to allow outgassing of potentially toxic volatile compounds. Three additional subsets of 20 g of gamma-sterilised soil were stored at −80 °C until soil physico-chemical property analyses.
Soil physico-chemical properties
Soil analyses were performed by the LAS (Laboratoire d'Analyses des Sols) technical platform at INRAe Arras, according to standard procedures detailed online ( https://www6.hautsdefrance.inra.fr/las/Methodes-d-analyse ). Exchangeable cations were extracted in either 1 M KCl (magnesium, calcium, sodium, iron, manganese) or 1 M NH4Cl (potassium) and determined by ICP-AES (JY180 ULTRACE). The 1 M KCl extract was also titrated using an automatic titrimeter (Mettler TS2DL25) to assess exchangeable H+ and aluminium (Al3+) cations. Total carbon, nitrogen and phosphorus contents were obtained after combustion at 1000 °C and determined using a Thermo Quest NCS 2500 analyser. The pH of the soil samples was measured in water at a soil-to-solution ratio of 1:2 (pH meter Mettler TSDL25). Exchangeable acidity was calculated as the sum of H+ and Al3+. The cation-exchange capacity (CEC) was calculated as the sum of the extracted exchangeable base cations and the exchangeable acidity. Results are compiled in Table S1.
Plant growth and sampling procedure
To investigate the dynamics of colonisation of naive poplars by microbial communities, 200 g of soil, either natural or sterilised, was distributed into 1500-cm3 boxes closed with filtered lids to allow gas exchange but prevent the entry of external microorganisms. In this way, only the microorganisms present in the natural soil could colonise the poplars, while the gamma-irradiated soil remained sterile throughout the experiment. All manipulations were performed under sterile hoods. Before launching the experiment, we determined the weight of each pot corresponding to 100% humidity (field capacity) and deduced from it the weight corresponding to 75% humidity. During the experiment, the soil was maintained at 75% humidity by regularly weighing the pots and adding the corresponding missing volume of sterile water under sterile conditions.
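The watering step above reduces to simple arithmetic on pot weights. A minimal sketch with hypothetical weights, assuming 1 g of water ≈ 1 mL:

```python
def water_to_add(current_weight_g, dry_weight_g, weight_at_fc_g, target_fraction=0.75):
    """Volume of sterile water (mL) to bring a pot back to the target
    fraction of field capacity (1 g water ≈ 1 mL).

    dry_weight_g   : pot tare + dry soil
    weight_at_fc_g : pot weight at 100% field capacity
    """
    water_at_fc = weight_at_fc_g - dry_weight_g
    target_weight = dry_weight_g + target_fraction * water_at_fc
    return max(0.0, target_weight - current_weight_g)

# Hypothetical pot: tare + 200 g dry soil = 260 g; 320 g at field capacity.
# Target weight at 75% humidity is 260 + 0.75 * 60 = 305 g.
print(water_to_add(current_weight_g=298.0, dry_weight_g=260.0, weight_at_fc_g=320.0))
# → 7.0 mL to add
```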
Two uniform in vitro seedlings (1-cm-long shoots and 1–2-cm-long roots) were transferred to each pot containing the environmental soil described above. Each pot was enclosed with a filtered cover allowing gas exchange, and the bottom (approximately 1/3 of the pot) was covered with aluminium foil to prevent algal and moss development. Plants were cultivated at 24 °C in a growth chamber under the same conditions as described above (photoperiod, 16 h day; light intensity, 150 μmol.m−2.s−1). In total, 100 plants distributed among 50 pots were grown over 1, 4, 15 and 30 days. For microbial community analyses, at each time point, bulk soil, rhizosphere (except at T1, where no adherent soil was observed), root and shoot samples from 5 plants, corresponding to 5 replicates, were collected. The shoots and roots were separated and weighed, and the rhizosphere was collected by placing the root systems with adherent soil into 15-mL Falcon tubes containing 2 mL sterile 1X phosphate-buffered saline (PBS: 0.13 M NaCl, 7 mM Na2HPO4, 3 mM NaH2PO4 [pH 7.2]). After removing the root systems, the tubes containing the rhizosphere were briefly vortexed and centrifuged for 10 min at 4000 rpm. The supernatant was then removed to retain only the rhizosphere samples. Finally, the roots were washed in sterile water to remove remaining soil particles. Soil, rhizosphere, shoot and root samples were frozen in liquid nitrogen and stored at −80 °C until DNA extraction. In vitro poplars were also harvested to confirm their axenic status prior to planting (time point T0). We analysed the metabolite composition of both shoot and root habitats after 30 days of growth by harvesting 25 seedlings. Shoots and roots of each poplar were flash-frozen in liquid nitrogen, stored at −80 °C and later lyophilised. Samples were then pooled to obtain 9 to 15 replicates of dry material ranging between 25 and 100 mg for both organs.
The dry shoot and root material was ground using metal beads in a tissue lyser before sample extraction and analysis of the metabolomic composition by GC–MS. In addition, we followed the exudate composition from 4 to 30 days of growth. Root systems of plantlets were left to exude in hydroponic solution for 4 h. Root exudates were collected for GC–MS analysis and filtered using Acrodisc® 25-mm syringe filters with 0.2-µm WWPTFE membrane (Pall Lab). Root exudates were purified using Sep-Pak C18 cartridges (Waters™) to remove salts contained in the hydroponic solution. Briefly, the column was conditioned by loading 700 µL (one volume) 7 times with 100% acetonitrile. The column was then equilibrated with 7 volumes of H2O before loading 2 mL of exudate. The columns were washed with 5 volumes of water and eluted in three steps, with an acetonitrile gradient of 20, 50 and 100%. Finally, the lyophilised root exudates were weighed and their metabolomic profile analysed by GC–MS.
Microbial community analyses
To investigate the establishment of microbial communities in distinct organs of axenic poplar Populus tremula x tremuloides T89, bulk soil, rhizosphere, roots and shoots were sampled after 1, 4, 15 and 30 days of growth. For soil and rhizosphere, DNA was extracted from 250 mg of material using the DNeasy PowerSoil kit following the protocol provided by the manufacturer (Qiagen). For root and shoot samples, 50 mg of ground plant material (less than 50 mg for root systems at T0, T1 and T4) was used to extract DNA using the DNeasy Plant Mini kit following the manufacturer protocol (Qiagen). DNA concentration was quantified using a NanoDrop 1000 spectrophotometer (ThermoFisher) and DNA extracts were normalised to a final concentration of 10 ng.µL−1 for soil and rhizosphere samples and 5 ng.µL−1 for root and shoot samples.
To maximise the coverage of the bacterial 16S rRNA and fungal ITS2 rRNA regions, a mix of forward and reverse primers was used as previously described. For bacterial communities, a combination of 4 forward and 2 reverse primers in equal concentrations (Table S5) was used, targeting the V4 region of the 16S rRNA. For fungal communities, 6 forward primers and one reverse primer in equal concentrations were used, targeting the ITS2 rRNA region (Table S5). To avoid the amplification of plant material, a mixture of peptide nucleic acid (PNA) probes inhibiting the poplar mitochondrial (mPNA) and chloroplast (pPNA) DNA for 16S libraries, and a third PNA mix blocking the poplar ITS rRNA (itsPNA), were used (Eurogentec). For AMF, a two-step PCR procedure amplifying the large ribosomal subunit (LSU) DNA was used, following the protocol of Brígido et al. The specific primers LR1 and NDL22 were used in the first PCR, whereas the primers FRL3 and FRL4 were used to amplify the LSU-D2 rRNA genes of AMF in the second PCR (Table S5). All primers used to generate the microbial libraries (16S, ITS and 28S) contained an extension used in PCR2 for tagging with specific sequences to allow subsequent identification of each sample. In addition, PCRs were prepared without addition of template DNA (negative controls) and on known fungal and bacterial communities (mock communities) as quality controls. The amplicons were visualised by electrophoresis through a 1% agarose gel in 1X TBE buffer. PCR products were purified using the Agencourt AMPure XP PCR purification kit (Beckman Coulter), following the manufacturer protocol. After DNA purification, PCR products were quantified with a Qubit® 2.0 fluorometer (Invitrogen) and new PCRs were performed for samples with concentrations lower than 2.5 ng.µL−1. Samples with DNA concentrations higher than 2.5 ng.µL−1 were sent for tagging (PCR2) and MiSeq Illumina next-generation sequencing (GenoScreen for ITS and 28S, PGTB INRAE for 16S).
Sequence processing
After sequence demultiplexing and barcode removal, fungal, bacterial and Glomerales sequences were processed using FROGS (Find Rapidly OTU with Galaxy Solution) implemented on the Galaxy analysis platform. Sequences were clustered into OTUs based on the iterative Swarm algorithm, and chimaeras and phiX contaminants were then removed. As suggested by Escudié and collaborators, OTUs with a number of reads lower than 5 × 10−5% of the total abundance, and not present in at least 3 samples, were removed. Fungal sequences not assigned to the ITS region by the ITSx filter implemented in FROGS were then discarded. Fungal sequences were affiliated using the UNITE Fungal Database v.8.3, bacterial sequences using the SILVA database v.138.1 and 28S Glomerales sequences using the MaarjAM database. OTUs with a BLAST identity lower than 90% and BLAST coverage lower than 95% were considered as chimaeras and removed from the dataset. Additionally, sequences affiliated with chloroplasts and mitochondria were removed. To achieve an equal number of reads in all samples, the rarefy_even_depth function from the phyloseq package in R was used. To optimise the analyses of community structure and diversity, a different rarefaction threshold was applied to each microbial community: bacterial communities were rarefied to 4377 sequences, fungal (ITS) communities to 5139 and 28S communities to 6831. FUNGuild and FungalTraits were combined to classify each fungal OTU into an ecological trophic guild. A confidence threshold was applied to keep only "highly probable" and "probable" trophic guild affiliations; the other OTUs were assigned as "unidentified".
Metabolite profiling
Untargeted metabolite levels were determined from lyophilised roots and shoots as described in Tschaplinski et al.
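The OTU abundance/prevalence filter and the even-depth rarefaction used during sequence processing can be sketched in a few lines. This is a minimal Python illustration of the logic with invented OTU counts; the actual analysis used FROGS and phyloseq's rarefy_even_depth:

```python
import random

def filter_otus(counts, min_frac=5e-7, min_samples=3):
    """Drop OTUs whose total reads fall below 5e-5 % (i.e. a fraction of
    5e-7) of all reads, or that occur in fewer than min_samples samples.

    counts: {otu_id: [reads per sample]}
    """
    total = sum(sum(per_sample) for per_sample in counts.values())
    return {
        otu: per_sample
        for otu, per_sample in counts.items()
        if sum(per_sample) >= min_frac * total
        and sum(1 for c in per_sample if c > 0) >= min_samples
    }

def rarefy(sample_counts, depth, seed=42):
    """Subsample one sample's OTU counts to an even depth, without
    replacement (the same idea as phyloseq's rarefy_even_depth)."""
    pool = [otu for otu, c in sample_counts.items() for _ in range(c)]
    drawn = random.Random(seed).sample(pool, depth)
    out = {otu: 0 for otu in sample_counts}
    for otu in drawn:
        out[otu] += 1
    return out

counts = {"OTU1": [100, 200, 150], "OTU2": [0, 1, 0], "OTU3": [10, 0, 5]}
print(sorted(filter_otus(counts)))  # → ['OTU1'] (OTU2/OTU3 fail the prevalence filter)
print(sum(rarefy({"A": 30, "B": 20}, 10).values()))  # → 10 reads after rarefaction
```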
To ensure complete extraction, freeze-dried, powdered material (~ 25 mg for shoot samples and 30 mg for root samples) was extracted twice overnight with 2.5 mL of 80% aqueous ethanol (Decon Labs, #2701); sorbitol (75 µL (L) or 50 µL (R and Myc) of a 1 mg/mL aqueous solution, Sigma-Aldrich; S1876) was added to the first extract as an internal standard to correct for subsequent differences in derivatisation efficiency and changes in sample volume during heating. The extracts were combined, and 500-µL (L) or 2-mL (R and Myc) aliquots were dried under nitrogen. Metabolites were silylated to produce trimethylsilyl derivatives by adding 500 µL of silylation-grade acetonitrile (Thermo Scientific; TS20062) to the dried extracts, followed by 500 µL of N-methyl-N-trimethylsilyltrifluoroacetamide with 1% trimethylchlorosilane (Thermo Scientific; TS48915), and heating for 1 h at 70 °C. For lyophilised root exudates, sorbitol (10 µL; 1 mg.mL−1) was added as internal standard prior to drying under nitrogen, and silylation was performed as described above but using 200 µL of each silylation solvent and reagent. After 2 days, a 1-µL aliquot was injected into an Agilent Technologies 7890A/5975C inert XL gas chromatograph/mass spectrometer (GC–MS) configured as previously described. The MS was operated in electron impact (70 eV) ionisation mode using a scan range of 50–650 Da. Metabolite peaks were quantified by area integration, extracting a characteristic mass-to-charge (m/z) fragment, with peaks scaled back to the total ion chromatogram using predetermined scaling factors and normalised to the extracted mass, the recovered internal standard, the analysed volume and the injection volume. The peaks were identified using a large in-house user-defined database of ~ 2700 metabolite signatures of trimethylsilyl-derivatised metabolites and the Wiley Registry 12th Edition combined with the NIST 2020 mass spectral database.
The combination of these databases allowed accurate identification of a large fraction of the observed metabolites. Unknowns were designated by their retention time (min) and key m/z. The assignation of the distinct metabolic pathways was performed using the Kyoto Encyclopedia of Genes and Genomes database (KEGG) conjointly with the Plant Metabolic Network (PMN) focusing on Populus trichocarpa ( https://pmn.plantcyc.org/POPLAR ).
In order to decipher the dynamics of establishment of the poplar microbiota (fungal and bacterial communities) between aboveground and belowground compartments, poplar, Populus tremula x tremuloides T89, was cultivated under sterile in vitro conditions on Murashige and Skoog (MS) medium (2.2 g MS salts including vitamins, Duchefa; 0.8% Phytagel and 2% sucrose). Poplar cuttings were cultivated at 24 °C in a growth chamber (photoperiod, 16 h day; light intensity, 150 μmol.m −2 .s −1 ) on MS supplemented with indole-3-butyric acid (IBA) (2 mg.L −1 ) for 1 week before being transferred onto MS for 2 weeks until root development. This growth protocol was used for all experiments.
To obtain a forest-like microbial inoculum, the topsoil horizon (0 to 20 cm) of a Populus trichocarpa x deltoides plantation located in Champenoux, France (48° 519,460 N, 2° 179,150 E), was collected over an area of 1 m 2 under 5 different trees. Soil was dried at room temperature and sieved at 2-mm-diameter pore size before being further used. Three subsets of 20 g were stored at − 80 °C until further soil physico-chemical property analyses. In order to decipher the influence of microbial communities on poplar root exudates and metabolomes, a subset of 50 kg of soil was sterilised by gamma irradiation (45–65 kGy, Ionisos, France). The soil was packaged in individual plastic bags containing 200 g of soil prior to gamma irradiation. The sterilised (gamma irradiated) soil was stored for 3 months at room temperature in the dark before being used to allow outgassing of potentially toxic volatile compounds. Three additional subsets of 20 g of gamma sterilised soil were stored at − 80 °C until further soil physico-chemical property analyses.
Soil analyses were performed using the LAS (Laboratoire d’Analyses des Sols) technical platform of soil analyses at INRAe Arras, according to standard procedures, detailed online ( https://www6.hautsdefrance.inra.fr/las/Methodes-d-analyse ). Exchangeable cations were extracted in either 1 M KCl (magnesium, calcium, sodium, iron, manganese) or 1 M NH 4 Cl (potassium) and determined by ICP-AES (JY180 ULTRACE). The 1 M KCl extract was also titrated using an automatic titrimeter (Mettler TS2DL25) to assess exchangeable H + and aluminium cations (Al 3+ ). Total carbon, nitrogen and phosphorus contents were obtained after combustion at 1000 °C and were determined using a Thermo Quest Type NCS 2500 analyser. The pH of the soil samples was measured in water at a soil to solution ratio of 1:2 (pH metre Mettler TSDL25). Exchangeable acidity was calculated by taking the sum of H + and Al 3+ . The cation-exchange capacity (CEC) was calculated by taking the sum of both extracted exchangeable base cations and exchangeable acidity. Results are compiled in Table S1.
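The cation-exchange capacity (CEC) calculation described above, the sum of exchangeable base cations plus exchangeable acidity (H + plus Al 3+ ), is simple arithmetic; the function below is an illustrative sketch, with the function name, arguments and units (cmol+/kg) assumed rather than taken from the laboratory protocol.

```python
def cation_exchange_capacity(base_cations, h_plus, al3_plus):
    """CEC as the sum of exchangeable base cations (e.g., K+, Ca2+, Mg2+, Na+)
    and exchangeable acidity (H+ + Al3+), all expressed in cmol+/kg."""
    exchangeable_acidity = h_plus + al3_plus
    return sum(base_cations) + exchangeable_acidity
```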
To investigate the dynamics of colonisation of naive poplars by microbial communities, 200 g of soil, either natural or sterilised, was distributed into 1500-cm 3 boxes closed with filtered lids to allow gas exchange but not the entry of external microorganisms. In this way, only the microorganisms present in the natural soil can colonise the poplars, while the gamma irradiated soil remains sterile throughout the experiment. All manipulations were performed under sterile hoods. Before launching the experiment, we calculated the weight of the pot corresponding to 100% humidity (field capacity), and then deduced the weight of the pot for 75% humidity. During the experiment, soil was maintained at 75% humidity by regularly weighing the pots and adding the corresponding missing volume of sterile water under sterile conditions. Two uniform in vitro seedlings (1 cm long for shoots and 1–2-cm-long roots) were transferred to each pot containing the environmental soil, described above. Each pot was enclosed with a filtered cover allowing gas exchange, and the bottom was covered (approximately 1/3 of the pot) with aluminium foil to prevent algal and moss development. Plants were cultivated at 24 °C in a growth chamber under the same conditions described above (photoperiod, 16 h day; light intensity, 150 μmol.m −2 .s −1 ). In total, 100 plants distributed among 50 pots were grown over 1, 4, 15 and 30 days. Regarding microbial community analyses, at each time point, bulk soil, rhizosphere (except at T1, where no adherent soil was observed), root and shoot samples from 5 plants, corresponding to 5 replicates, were collected. The shoots and roots were separated and weighed, and the rhizosphere was collected by pouring the root systems with adherent soil into 15-mL falcon tubes containing 2 mL sterile 1X phosphate-buffered saline (PBS: 0.13 M NaCl, 7 mM Na 2 HPO 4 , 3 mM NaH 2 PO 4 [pH 7.2]).
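The gravimetric watering step described above (maintaining soil at 75% of field capacity by weighing the pots) reduces to a simple mass balance. The helper below is a hypothetical sketch of that calculation, assuming 1 g of water corresponds to roughly 1 mL; it is not a step from the published protocol.

```python
def water_to_add_ml(current_weight_g, dry_pot_weight_g, field_capacity_weight_g,
                    target_fraction=0.75):
    """Sterile water (mL) needed to bring a pot back to the target fraction
    of field capacity, assuming 1 g of water ~ 1 mL."""
    water_at_capacity = field_capacity_weight_g - dry_pot_weight_g
    target_weight = dry_pot_weight_g + target_fraction * water_at_capacity
    return max(0.0, target_weight - current_weight_g)
```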
After removing the root systems, the samples were briefly vortexed in the falcon tubes containing the rhizosphere and centrifuged for 10 min at 4000 rpm. Then, the supernatant was removed to only retain the rhizosphere samples. Finally, the roots were washed in sterile water to remove remaining soil particles. Soil, rhizosphere, shoot and root samples were frozen in liquid nitrogen and stored at − 80 °C until DNA extraction. In vitro poplars were also harvested to confirm their axenic status prior to planting (time point T0). We analysed the metabolite composition for both shoot and root habitats after 30 days of growth by harvesting 25 seedlings. Shoots and roots of each poplar were flash-frozen in liquid nitrogen, stored at − 80 °C, and later lyophilised. Samples were then pooled to obtain between 9 and 15 replicates of dry material ranging between 25 and 100 mg for both organs. The dry shoot and root material were ground using metal beads and a tissue lyser before sample extraction and analysis of their metabolomic composition by GC–MS. In addition, we followed the exudate composition from 4 to 30 days of growth. Root systems of plantlets were left to exude in hydroponic solution for 4 h. Root exudates were collected for GC–MS analysis and filtered using Acrodisc® 25-mm syringe filters with 0.2-µm WWPTFE membrane (Pall Lab). Root exudates were purified using Sep-Pak C18 cartridges (Waters™) in order to remove salts contained in the hydroponic solution. Briefly, the column was conditioned by loading 700 μl (one volume) 7 times with 100% acetonitrile. The column was then equilibrated with 7 volumes of H 2 O before loading 2 ml of exudate. The columns were washed with 5 volumes of water and eluted in three steps with an acetonitrile gradient of 20, 50 and 100%. Finally, the lyophilised root exudates were weighed, and their metabolomic profile analysed by GC–MS.
To investigate the establishment of microbial communities in distinct organs of axenic poplar Populus tremula x tremuloides T89, bulk soil, rhizosphere, roots and shoots were sampled after 1, 4, 15 and 30 days of growth. For soil and rhizosphere, DNA was extracted from 250 mg of material using the DNeasy PowerSoil kit following the manufacturer's protocol (Qiagen). For root and shoot samples, 50 mg of ground plant material (less than 50 mg for root systems at T0, T1 and T4) was used to extract DNA using the DNeasy Plant Mini kit following the manufacturer's protocol (Qiagen). DNA concentration was quantified using a NanoDrop 1000 spectrophotometer (ThermoFisher) and DNA extracts were normalised to a final concentration of 10 ng.µL −1 for soil and rhizosphere samples and 5 ng.µL −1 for root and shoot samples. To maximise the coverage of the bacterial 16S rRNA and fungal ITS2 rRNA regions, a mix of forward and reverse primers was used as previously described . Regarding bacterial communities, a combination of 4 forward and 2 reverse primers in equal concentration (Table S5) was used, targeting the V4 region of the 16S rRNA. For fungal communities, 6 forward primers and one reverse primer in equal concentration were used, targeting the ITS2 rRNA region (Table S5). To avoid the amplification of plant material, a mixture of peptide nucleic acid (PNA) probes , inhibiting the poplar mitochondrial (mPNA) and chloroplast DNA (pPNA) for 16S libraries, and a third mix of PNA blocking the poplar ITS rRNA (itsPNA) were used (Eurogentec). Regarding AMF, a two-step PCR procedure to amplify the large ribosomal subunit (LSU) DNA was used, following the protocol of Brígido et al. . The specific primers LR1 and NDL22 were used in the first PCR, whereas the primers FRL3 and FRL4 were used to amplify the LSU-D2 rRNA genes of AMF in the second PCR (Table S5).
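Normalising DNA extracts to a fixed concentration is a standard C1·V1 = C2·V2 dilution; the helper below is a hypothetical illustration of that arithmetic, not part of the extraction kit protocol.

```python
def diluent_volume_ul(stock_conc_ng_ul, stock_vol_ul, target_conc_ng_ul):
    """Buffer volume (uL) to add so that C1 * V1 = C2 * (V1 + V_add)."""
    if target_conc_ng_ul > stock_conc_ng_ul:
        raise ValueError("cannot reach a higher concentration by dilution")
    final_vol = stock_conc_ng_ul * stock_vol_ul / target_conc_ng_ul
    return final_vol - stock_vol_ul
```

For example, bringing 10 µL of a 50 ng/µL extract down to 10 ng/µL requires adding 40 µL of buffer.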
All primers used to generate the microbial libraries (16S, ITS and 28S) contained an extension used in PCR2 for tagging with specific sequences to allow the subsequent identification of each sample. In addition, PCRs were prepared without addition of fungal DNA (negative controls) and on known fungal and bacterial communities (mock communities) as quality controls. The amplicons were visualised by electrophoresis through a 1% agarose gel in 1X TBE buffer. PCR products were purified using the Agencourt AMPure XP PCR purification kit (Beckman Coulter), following the manufacturer's protocol. After DNA purification, PCR products were quantified with a Qubit®2.0 fluorometer (Invitrogen) and new PCRs were performed for samples with concentration lower than 2.5 ng.µL −1 . Samples with DNA concentration higher than 2.5 ng.µL −1 were sent for tagging (PCR2) and MiSeq Illumina next-generation sequencing (GenoScreen for ITS and 28S, PGTB INRAE for 16S).
After sequence demultiplexing and barcode removal, fungal, bacterial and glomerales sequences were processed using FROGS (Find Rapidly OTU with Galaxy Solution) implemented on the Galaxy analysis platform . Sequences were clustered into OTUs based on the iterative Swarm algorithm, and then chimaeras and fungal phiX contaminants were removed. As suggested by Escudié and collaborators , OTUs with a number of reads lower than 5 × 10 −5 % of total abundance, and not present in at least 3 samples, were removed. Fungal sequences not assigned to the ITS region using the ITSx filter implemented in FROGS were then discarded, and fungal sequences were affiliated using the UNITE Fungal Database v.8.3 , bacterial sequences using the SILVA database v.138.1 and 28S glomerales sequences using the MaarjAM database . OTUs with a BLAST identity lower than 90% and BLAST coverage lower than 95% were considered as chimaeras and removed from the dataset. Additionally, sequences affiliated with chloroplasts and mitochondria were removed. To achieve an equal number of reads in all samples, the rarefy_even_depth function from the Phyloseq package in R was used. To optimise the analyses of community structures and diversity, a different rarefaction threshold was applied to each microbial community: bacterial communities were rarefied to 4377 sequences, fungal (ITS) communities to 5139 and 28S communities to 6831. FUNGuild and FungalTraits were combined to classify each fungal OTU into an ecological trophic guild. A confidence threshold was applied to keep only “highly probable” and “probable” trophic guild affiliations; all other OTUs were assigned as “unidentified”.
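As a sketch of the abundance/prevalence filter and even-depth rarefaction described above (an illustrative pure-Python re-implementation, not the FROGS or phyloseq code; note that 5·10 −5 % of total abundance corresponds to a fraction of 5e-7):

```python
import random

def filter_otus(counts, min_total_fraction=5e-7, min_samples=3):
    """Drop OTUs whose total reads fall below a fraction of the dataset total
    (5e-7, i.e., 5*10^-5 %) or that occur in fewer than min_samples samples.
    counts maps OTU id -> list of per-sample read counts."""
    grand_total = sum(sum(per_sample) for per_sample in counts.values())
    kept = {}
    for otu, per_sample in counts.items():
        abundance = sum(per_sample)
        prevalence = sum(1 for c in per_sample if c > 0)
        if abundance >= min_total_fraction * grand_total and prevalence >= min_samples:
            kept[otu] = per_sample
    return kept

def rarefy(sample_counts, depth, seed=0):
    """Randomly subsample one sample's reads (OTU id -> count) to an even depth."""
    pool = [otu for otu, c in sample_counts.items() for _ in range(c)]
    drawn = random.Random(seed).sample(pool, depth)
    rarefied = {}
    for otu in drawn:
        rarefied[otu] = rarefied.get(otu, 0) + 1
    return rarefied
```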
Untargeted metabolite levels were determined from lyophilised roots and shoots as described in Tschaplinski et al. . To ensure complete extraction, freeze-dried, powdered material (~ 25 mg for shoot samples and 30 mg for root samples) was twice extracted overnight with 2.5 mL of 80% ethanol (aqueous) (Decon Labs, #2701); sorbitol (75 µL (L) or 50 µL (R and Myc) of a 1 mg/mL aqueous solution, Sigma-Aldrich; S1876) was added to the first extract as an internal standard to correct for subsequent differences in derivatisation efficiency and changes in sample volume during heating. The extracts were combined, and 500-µL (L) or 2-mL (R and Myc) aliquots were dried under nitrogen. Metabolites were silylated to produce trimethylsilyl derivatives by adding 500 µL of silylation-grade acetonitrile (Thermo Scientific; TS20062) to the dried extracts followed by 500 µL of N-methyl-N-trimethylsilyltrifluoroacetamide with 1% trimethylchlorosilane (Thermo Scientific; TS48915) and heating for 1 h at 70 °C. For lyophilised root exudates, sorbitol (10 µL; 1 mg/mL) was added as internal standard prior to drying under nitrogen and silylating as described above but using 200 µL of each silylation solvent and reagent. After 2 days, a 1-µL aliquot was injected into an Agilent Technologies 7890A/5975C inert XL gas chromatograph / mass spectrometer (MS) configured as previously described . The MS was operated in electron impact (70 eV) ionisation mode using a scan range of 50–650 Da. Metabolite peaks were quantified by area integration by extracting a characteristic mass-to-charge (m/z) fragment with peaks scaled back to the total ion chromatogram using predetermined scaling factors and normalised to the extracted mass, the recovered internal standard, the analysed volume and the injection volume.
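The peak normalisation above lists four factors without giving an explicit formula; one plausible arrangement is sketched below. All parameter names are hypothetical and the exact formula used in the study may differ.

```python
def normalise_peak(peak_area, tic_scaling, is_recovery, extracted_mass_mg,
                   analysed_fraction, injection_fraction):
    """One plausible arrangement of the stated normalisation factors:
    scale the extracted-ion peak back to the total ion chromatogram (TIC),
    then divide by internal-standard recovery, sample mass, and the fractions
    of extract analysed and injected. Illustrative only."""
    scaled = peak_area * tic_scaling
    return scaled / (is_recovery * extracted_mass_mg
                     * analysed_fraction * injection_fraction)
```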
The peaks were identified using a large in-house user-defined database of ~ 2700 metabolite signatures of trimethylsilyl-derivatised metabolites and the Wiley Registry 12th Edition combined with NIST 2020 mass spectral database. The combination of these databases allowed accurate identification of a large fraction of the observed metabolites. Unknowns were designated by their retention time (min) and key m/z. The assignation of the distinct metabolic pathways was performed using the Kyoto Encyclopedia of Genes and Genomes database (KEGG) conjointly with the Plant Metabolic Network (PMN) focusing on Populus trichocarpa ( https://pmn.plantcyc.org/POPLAR ).
All data analyses, statistics and data representation were computed in R version 4.3.0 (R Core Team, 2023) using RStudio version 2023.03.1 (RStudio Team, 2023), and all figures were created using ggplot2 v.3.4.2. Soil parameters were tested for normal distribution using Shapiro–Wilk tests. If the data were normally distributed, the differences between the means were assessed using Student t tests followed by the Bonferroni correction; otherwise, Wilcoxon tests were used. The difference of root and shoot fresh weight between poplar grown on natural and sterilised soil was assessed using a Wilcoxon test followed by Bonferroni corrections. The dynamics of root exudation over time was assessed using a Kruskal–Wallis test with false discovery rate (FDR) corrections. The differences among sampling times were assessed with a Fisher LSD post hoc test. The differences in metabolites between natural and gamma-irradiated soil among plant organs were assessed using a Wilcoxon test followed by FDR corrections. Finally, multiple regressions using redundancy analysis (RDA; rda function in the vegan package) were used for bacterial and fungal communities between root and shoot habitats and in the rhizosphere over time, with plant metabolites and root exudates as explanatory variables. The significance of plant metabolites and root exudates and their correlations with microbial communities were assessed using the envfit function in vegan with 1000 permutations and FDR corrections. An ANOVA-like permutation test (function anova.cca in the vegan package with 1000 permutations) was then used to determine whether RDA models were statistically significant.
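The multiple-testing corrections applied throughout (Bonferroni and FDR) have compact definitions; the pure-Python sketch below mirrors the spirit of R's p.adjust with methods "bonferroni" and "BH", and is illustrative rather than the code used in the study.

```python
def bonferroni(pvals):
    """Multiply each p-value by the number of tests, capped at 1."""
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR-adjusted p-values (step-up procedure)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):  # walk from the largest p-value down
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted
```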
Differences in fungal and bacterial community structures between tissues and time were tested using permutational multivariate analysis of variance (PERMANOVA, adonis2 function in the vegan package) based on Bray–Curtis and Jaccard distances, and differences in structures were visualised using nonmetric multidimensional scaling (NMDS) ordination. The significance of microbial communities and environmental variables and their correlations were calculated using the envfit function in vegan with 1000 permutations and FDR corrections. Differences in richness and diversity between genotypes over time were assessed using the Kruskal–Wallis test with Bonferroni corrections, followed by the Fisher LSD post hoc test. Differences in fungal and bacterial relative abundance at the phylum and genus levels between organs were tested using Kruskal–Wallis tests, with Bonferroni corrections for phyla and FDR corrections for genera. In order to reduce the weight of the correction on fungal genera, only fungal genera with a relative abundance higher than 1% were kept and a Kruskal–Wallis test was applied, followed by Bonferroni correction. Variations in the relative abundance of fungal trophic guilds were assessed by Kruskal–Wallis tests, followed by Bonferroni correction, while fungal diversity and richness were analysed using Kruskal–Wallis tests and LSD post hoc tests.
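The Bray–Curtis and Jaccard distances and the Shannon diversity index used in these analyses have compact definitions; below is a minimal pure-Python sketch, illustrative only (the study used the vegan and phyloseq R packages).

```python
import math

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    num = sum(abs(x - y) for x, y in zip(a, b))
    den = sum(x + y for x, y in zip(a, b))
    return num / den if den else 0.0

def jaccard_distance(a, b):
    """Jaccard distance on presence/absence of taxa."""
    pa = {i for i, x in enumerate(a) if x > 0}
    pb = {i for i, x in enumerate(b) if x > 0}
    union = pa | pb
    return (1 - len(pa & pb) / len(union)) if union else 0.0

def shannon(abundances):
    """Shannon diversity index H' = -sum(p_i * ln p_i)."""
    total = sum(abundances)
    props = [x / total for x in abundances if x > 0]
    return -sum(p * math.log(p) for p in props)
```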
Supplementary Material 1: Figure S1 . Experimental design of the study. Poplar cuttings ( Populus tremula x tremuloides T89) were grown in vitro for 3 weeks before being transferred into microcosms containing either natural or sterilised (gamma irradiated) soil and grown for 30 days. Before transplantation, roots and shoots ( n = 3) were sampled to confirm the axenic status of cuttings. After 1, 4, 15 and 30 days, we sampled the soil ( n = 3), the rhizosphere ( n = 3-5), the roots ( n = 3-5) and shoots ( n = 3-5) of poplar grown in natural soil for microbial community analyses. In parallel, we collected the root exudates ( n = 5) after 4, 15 and 30 days of growth as well as the roots ( n = 15-25) and shoots ( n = 15-25) after 30 days of poplar grown in natural and sterilised soil for metabolomic analyses. At each sampling time, we measured the fresh biomass for both roots and shoots. Figure S2 . Influence of microorganisms on poplar growth after 30 days. No significant difference of aerial and root growth of poplar grown in natural and sterilised soil over 30 days ( n = 15-25, Wilcoxon, Bonferroni corrected, p.adj > 0.05). Figure S3. Influence of microorganisms on the root exudates profile over time. Dynamics of root exudates of poplar grown in the presence or absence of microorganisms. Values correspond to the log10-transformed mean exudate concentration. Letters indicate significant differences of metabolite concentration over time for each treatment ( n = 5, Kruskal-Wallis, FDR corrections, p.adj ≤ 0.05, Fisher LSD post-hoc test). Figure S4 . Influence of microorganisms on the composition and abundance of root exudates, root and shoot metabolites after 30 days of growth. Bar lengths and colors represent the log2 fold change of the relative abundance of metabolic compounds detected in natural soil (positive bars) versus sterilised soil (negative bars).
Asterisks (*) indicate significant differences of metabolite abundance between the two treatments ( n = 5-25, Wilcoxon, FDR corrections, p.adj ≤ 0.05). Figure S5 . Relative abundance of the different fungal trophic guilds detected in soil, rhizosphere, root and shoot compartments over 30 days of growth. Ecological trophic guild assignment was performed by combining the FUNGuild and FungalTraits databases (Kruskal-Wallis, Bonferroni correction, p.adj > 0.05, Fisher LSD post-hoc test, n = 3-5). Figure S6. Relative abundance of the dominant (>1%) genera and diversity (Shannon index) of bacterial and fungal communities across habitats over 30 days of growth. Relative abundance of (A) bacterial and (B) fungal genera and their diversity in the four compartments sampled (soil, rhizosphere, root and shoot), (Kruskal-Wallis, FDR correction, p.adj ≤ 0.05, n = 3-5). Figure S7. Relative abundance of the dominant (>1%) genera and diversity (Shannon index) of 28S communities across habitats over 30 days of growth. Glomerales genera and their diversity in the 3 compartments sampled (soil, rhizosphere, and root), (Kruskal-Wallis, FDR correction, p.adj ≤ 0.05, n = 3-5). Figure S8. Relative abundance of fungal genera associated with specific habitats over 30 days of growth. Fungal taxa were chosen according to their significance related to particular time or habitat in multivariate partition analyses after multiple regression analyses and 1,000 permutations (FDR corrected, p.adj ≤ 0.01). Histograms represent the mean relative abundance of each taxon and bars indicate their standard error ( n = 3-5). Figure S9. Relative abundance of bacterial genera associated with specific habitats over 30 days of growth. Bacterial taxa were chosen according to their significance related to particular time or habitat in multivariate partition analyses after multiple regression analyses and 1,000 permutations (FDR corrected, p.adj ≤ 0.01).
Histograms represent the mean relative abundance of each taxon and bars indicate their standard error ( n = 3-5). Supplementary Material 2. Table S1. Edaphic parameters between natural and sterilised (gamma-irradiated) soils. Supplementary Material 3. Table S2. (A) Root exudate composition over time between natural and sterilised (gamma-irradiated) soils. (B) Root and shoot metabolite composition after 30 days of growth between natural and sterilised (gamma-irradiated) soils. Supplementary Material 4. Table S3. (A) Microbial richness (OTU number) and diversity (Shannon index) in soil, rhizosphere, poplar roots and poplar shoots over 30 days of growth. (B) Relative abundance of fungal trophic guilds detected in soil, rhizosphere and poplar roots and shoots over time. (C) Relative abundance of fungal genera (ITS) and corresponding guilds detected in soil, rhizosphere and poplar roots and shoots over time. (D) Relative abundance of Glomerales (28S) detected in soil, rhizosphere and poplar roots over time. (E) Relative abundance of bacterial genera (16S) detected in soil, rhizosphere and poplar roots and shoots over time. Supplementary Material 5. Table S4. Permutational multivariate ANOVA results (PERMANOVA) for differences in bacterial, fungal and Glomerales communities between compartment, time and time-compartment interaction. Supplementary Material 6. Table S5. Sequences of primers and PNA probes used for bacteria (16S), fungi (ITS) and glomerales (28S) in this study.
Factors influencing implementation of a care coordination intervention for cancer survivors with multiple comorbidities in a safety-net system: an application of the Implementation Research Logic Model | 9a7392b4-6788-44ee-ad15-0cd4ba3e28c3 | 10694894 | Internal Medicine[mh] | Early detection and treatment advances are driving steady increases in the number of cancer survivors. In many cases, living with cancer has become similar to living with common chronic conditions such as diabetes and heart disease. Most patients with cancer also have three or more chronic conditions requiring coordinated care between oncology, primary care, and other specialties. Primary care can play an important role in providing comprehensive, coordinated care for all conditions, including cancer. In fact, national cancer survivorship guidelines recommend that patients with cancer, known as cancer survivors, need primary care clinicians (PCC) as part of their care team because PCCs play an important role in providing comprehensive, whole person care . To achieve these goals, communication and coordination between primary care and oncology are paramount . Although central to the Institute of Medicine’s recommendations made nearly 20 years ago , the field is unclear how to ensure cancer survivors stay connected with primary care from start of cancer treatment and throughout their cancer survivorship journey. Well-established evidence from primary care settings demonstrates the effectiveness of using patient registries and designated care coordinators for improving patient outcomes for many chronic conditions such as diabetes and hypertension . Patient registries enable identification of all patients with specific conditions (e.g., breast cancer, diabetes) to proactively plan care delivery for timely provision of preventive and chronic disease care. 
Care coordinators play a critical role in managing referrals and connections between specialties and in ensuring that relevant clinical information to manage patients’ care is available and accessible to clinicians at point of care. However, few trials have studied implementation of these evidence-based interventions among patients living with cancer and chronic conditions, particularly in safety-net healthcare settings. Project CONNECT implemented these effective care coordination interventions (care coordinator plus patient registry) among cancer survivors with chronic conditions in an urban, integrated safety-net health system that serves a disproportionately under- and uninsured ethnic/racial minority population . The aims of this study are to (1) identify factors influencing implementation of Project CONNECT and (2) identify mechanisms through which the factors influenced implementation outcomes.
This multi-method qualitative study was embedded within a pragmatic trial that tested the implementation of a multicomponent evidence-based intervention aimed at enhancing care coordination for breast and colorectal cancer survivors with chronic conditions . Study procedures were approved by the University of Texas Southwestern Institutional Review Board (STU 102015–090), the University of Texas Health Science Center at Houston, and by Parkland Health Office of Research Administration, and reporting follows the Standards for Reporting Qualitative Research guidelines . Setting This study was conducted at Parkland Health (Parkland), the safety-net health system serving Dallas County, TX, USA . “Safety-net” healthcare systems are those that deliver healthcare primarily to uninsured, Medicaid, and other low-income and vulnerable patient populations . Parkland includes a network of 13 primary care clinics located in predominantly under resourced, ethnic/racial minority communities across Dallas County, and a centrally located main campus. The main campus consists of an inpatient hospital, outpatient surgery center, and specialty care clinics, which include multidisciplinary cancer clinics (i.e., medical, surgical, and radiation oncology clinics). Breast and colorectal cancer are the top two types of cancers treated at Parkland. Twenty-four percent of patients with breast cancer present with stages 3 or 4 breast cancer compared to 10% nationally; 61% of patients with colorectal cancer present with stages 3 and 4 cancer compared to 45% nationally . Evidence-based intervention components and implementation strategies Project CONNECT was a multicomponent evidence-based intervention and included (1) an electronic medical record (EMR)-based patient registry and (2) a care coordinator . 
The registry identified patients diagnosed with stages I–III breast or colorectal cancers plus one or more of the following chronic conditions: diabetes, hypertension, heart disease, chronic kidney disease, and/or chronic lung disease. The care coordinator was a registered nurse employed by Parkland who helped connect study-eligible cancer survivors to primary care by facilitating appointments with primary care and coordinating care for patients between oncology and primary care. Strategies identified a priori to implement the intervention components into clinical practice included the following: identifying champions, changing records systems, creating new clinical workflows, and flexibility in implementation (Table ) . Guiding theoretical and conceptual frameworks The practice change model (PCM) and the Consolidated Framework for Implementation Research (CFIR) are determinant frameworks that guided data collection to identify barriers and facilitators of implementation . The PCM includes four elements (e.g., internal motivators, external motivators, resources, and opportunities for change) and depicts how these multi-level elements can impact intervention implementation in healthcare settings over time . The CFIR is a menu of individual-, program-, and organizational-level constructs consolidated from 19 theories and models related to intervention implementation . The constructs are organized into five overarching domains: intervention characteristics, outer setting, inner setting, characteristics of individuals, and process. These frameworks are highly complementary. PCM grounded our focus on drivers of practice operations and potential interactions, while CFIR helped us attend to relationships between our intervention, actors in the practice, and the structure and sequence of care delivery to maximize learning from our real-world setting .
Proctor and colleagues’ taxonomy of implementation outcomes and the Implementation Research Logic Model (IRLM) informed our data analysis and synthesis . This study used qualitative data to assess two implementation outcomes at the patient and provider levels (i.e., intervention acceptability and appropriateness) and two outcomes at the organizational level (e.g., intervention adoption and penetration). Acceptability is defined as patient and/or provider satisfaction with the intervention, and appropriateness is defined as the perceived fit of the intervention in the setting . Adoption is defined as the initial uptake of the intervention; penetration is defined as the integration of an intervention within a clinical team, which is similar to the concept of “reach” in Glasgow’s RE-AIM framework . Adoption and penetration were assessed longitudinally during intervention implementation (Phase 2, see below), allowing assessment of continued adoption or utilization of the intervention beyond initial uptake. Finally, the IRLM is a visualization tool to depict causal pathways between intervention components, determinants (i.e., barriers and facilitators) of implementation, implementation strategies, mechanisms of action, and implementation . Mechanisms of action define how implementation strategies operate to influence outcomes. We used the IRLM to elucidate the relationships between determinants, mechanisms, and implementation outcomes.
We used purposive sampling to select clinical team members who varied by their roles and specialty to identify barriers and facilitators to delivering coordinated care for patients with cancer and chronic conditions. Study participants included clinicians (e.g., physicians, nurse practitioners), clinic staff (e.g., nurses, care coordinators, social workers, financial services coordinators), and health system leaders (e.g., unit managers, clinic managers, and medical service chiefs) in oncology, primary care, and specialty care. We recruited multiple participants for each role and unit to solicit diverse perspectives. Data sources included the following: (1) documents, (2) structured observations and field notes, and (3) semi-structured interviews with patients, providers, staff, and leaders from multiple departments across the integrated safety-net system. Documents Documents included meeting notes, policies and procedures, correspondence among stakeholders, EMR screenshots, patient-facing materials, tools and checklists, and other resources. Documents were requested from providers, staff, and leaders, and they were also offered unsolicited by interviewees and observed stakeholders to clarify processes, provide supplementary information, or serve as historical records. Structured observations and field notes We used structured observation guides to facilitate consistent data capture of care coordination and practice change processes . 
Exemplar domains and questions included the following: evidence of team-based care (e.g., do oncology providers discuss other conditions or comorbidities?), documentation of practice (e.g., where do oncology providers document information related to the survivorship care plan or referrals for follow-up care after discharge?), patient access to information (e.g., do providers tell patients what to do in the event of acute needs?), continuity of care (e.g., how are subsequent appointments scheduled?), and team-based care (e.g., to what extent do providers engage patients in taking an active role in their care?). We also selected sites for observation to capture the patient pathway and provider/staff movement through the care coordination process (e.g., registration and intake areas, patient-provider interactions, provider-staff interactions and work areas, and nurse navigation, referral, and case management processes).

Semi-structured interviews

Interviews were semi-structured to guide the interviewer through pre-planned topics while allowing for follow-up questions tailored to participant feedback and for additional unplanned questions to be incorporated as appropriate. Interview guides were iteratively developed by investigators and adapted to role and clinical unit. We anticipated barriers and facilitators to implementation based on the CFIR and PCM and, accordingly, focused the interviews on domains including the following: care coordination processes between oncology and primary care, perceptions of the role of the nurse coordinator and registry (interventions), challenges or gaps in care for cancer survivors with chronic conditions, communication about policies and procedures within clinics, EMR documentation and challenges, delineation of roles and expectations between oncology and primary care providers and staff, and patient feedback about areas of confusion or concern.
Informed verbal consent was obtained from all study participants prior to their interviews. Patients received a US $25 gift card in appreciation for their time. In accordance with Parkland policy, employees were not provided with an incentive to participate in research.

Data analysis

Immersion-crystallization processes

Data analysis proceeded in four immersion-crystallization cycles, or cycles of repeated exposure to and synthesis of data, to identify themes and categories. In cycle 1, the team developed two deductively driven thematic codebooks based on interview guide topics, pre- and post-intervention phases, and a preliminary review of documents (n = 259 unique documents), field observations (n = 11), and interview transcripts (n = 140). Additional emergent themes were incorporated into the initial codebook drafts for the first 10% of transcripts, and the finalized codebook was used for the remaining transcripts. Codebooks for the two phases included many of the same codes; however, each also included additional unique codes given differences in thematic foci and emergent findings during each phase. For example, Phase 1 codes included existing barriers to care coordination, organizational structure, and processes; Phase 2 codes included patient experiences, acuity of care, and transitions in care. All coding was completed in NVivo 12.0 (QSR, Australia). After coding all data, the team created node reports, summaries of data collected, and exemplar quotes for each code and identified codes tying together steps in the cancer care continuum to the intervention components: care coordination, survivorship planning, intervention, and intervention impact. In analysis cycle 2, the team applied codes from PCM and CFIR to the selected node reports, focusing on identifying organizational inner setting characteristics, system resources, stakeholder motivations, and opportunities for change.
The purpose of this cycle was to understand how and why care coordination processes occurred pre- and post-intervention. In cycle 3, the team returned to the findings from cycles 1 and 2, coding for implementation outcomes in order to describe how the intervention components and care coordination processes mapped to those outcomes. Implementation outcomes were assessed from qualitative data; most validated quantitative implementation outcome measures were not available at the start of this study. In cycle 4, the team met weekly to interpret findings and synthesize data linking implementation strategies, determinants, mechanisms, and outcomes using the IRLM.
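The cycle 4 synthesis links each determinant to an implementation strategy, a mechanism of action, and an implementation outcome. A minimal sketch of such an IRLM-style pathway record is shown below; the record structure and the two example pathways are hypothetical illustrations drawn loosely from the narrative, not the study's actual logic model.

```python
# Sketch of an IRLM-style causal pathway record: determinant ->
# implementation strategy -> mechanism of action -> implementation outcome.
# All entries are illustrative assumptions, not the study's logic model.

from dataclasses import dataclass


@dataclass(frozen=True)
class IRLMPathway:
    determinant: str  # barrier or facilitator
    strategy: str     # implementation strategy addressing it
    mechanism: str    # how the strategy operates
    outcome: str      # Proctor implementation outcome affected


pathways = [
    IRLMPathway(
        determinant="informal communication networks limit staff awareness",
        strategy="remind providers and staff (huddle attendance)",
        mechanism="increased knowledge of the nurse coordinator role",
        outcome="adoption",
    ),
    IRLMPathway(
        determinant="engaged system leaders aware of care gaps",
        strategy="identify champions",
        mechanism="sense of ownership over the intervention",
        outcome="acceptability",
    ),
]

# Group pathways by the outcome they influence, as in the cycle 4 synthesis.
by_outcome: dict[str, list[IRLMPathway]] = {}
for p in pathways:
    by_outcome.setdefault(p.outcome, []).append(p)

print(sorted(by_outcome))  # ['acceptability', 'adoption']
```

Grouping pathways by outcome mirrors how Figure-style logic models are typically read: each outcome column collects the determinant-strategy-mechanism chains feeding into it.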
This study was conducted at Parkland Health (Parkland), the safety-net health system serving Dallas County, TX, USA. “Safety-net” healthcare systems are those that deliver healthcare primarily to uninsured, Medicaid, and other low-income and vulnerable patient populations. Parkland includes a network of 13 primary care clinics located in predominantly under-resourced, ethnic/racial minority communities across Dallas County, and a centrally located main campus. The main campus consists of an inpatient hospital, outpatient surgery center, and specialty care clinics, which include multidisciplinary cancer clinics (i.e., medical, surgical, and radiation oncology clinics). Breast and colorectal cancer are the top two types of cancers treated at Parkland. Twenty-four percent of patients with breast cancer present with stage 3 or 4 breast cancer compared to 10% nationally; 61% of patients with colorectal cancer present with stage 3 or 4 cancer compared to 45% nationally.
Project CONNECT was a multicomponent evidence-based intervention that included (1) an electronic medical record (EMR)-based patient registry and (2) a care coordinator. The registry identified patients diagnosed with stages I–III breast or colorectal cancers plus one or more of the following chronic conditions: diabetes, hypertension, heart disease, chronic kidney disease, and/or chronic lung disease. The care coordinator was a registered nurse employed by Parkland who helped connect study-eligible cancer survivors to primary care by facilitating primary care appointments and coordinating care for patients between oncology and primary care. Strategies identified a priori to implement the intervention components into clinical practice included the following: identifying champions, changing records systems, creating new clinical workflows, and flexibility in implementation (Table ).
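The registry's eligibility rule (stage I–III breast or colorectal cancer plus at least one qualifying chronic condition) can be sketched as a simple filter. This is an illustrative reconstruction under assumed field names; the actual registry was built inside the EMR (EPIC Reporting Workbench), not in application code.

```python
# Illustrative sketch of the Project CONNECT registry eligibility rule.
# Field names and the patient-record structure are assumptions for this
# example; the real registry logic lived in the EMR reporting tool.

ELIGIBLE_CANCERS = {"breast", "colorectal"}
ELIGIBLE_STAGES = {"I", "II", "III"}
CHRONIC_CONDITIONS = {
    "diabetes",
    "hypertension",
    "heart disease",
    "chronic kidney disease",
    "chronic lung disease",
}


def is_registry_eligible(patient: dict) -> bool:
    """True if the patient has stage I-III breast or colorectal cancer
    plus at least one qualifying chronic condition."""
    return (
        patient.get("cancer_type") in ELIGIBLE_CANCERS
        and patient.get("cancer_stage") in ELIGIBLE_STAGES
        and bool(CHRONIC_CONDITIONS & set(patient.get("conditions", [])))
    )


# Example: stage II breast cancer plus hypertension qualifies.
example = {"cancer_type": "breast", "cancer_stage": "II",
           "conditions": ["hypertension"]}
print(is_registry_eligible(example))  # True
```

A filter like this, run over the patient census, would yield the worklist the nurse coordinator used to identify and track eligible patients through oncology treatment.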
The practice change model (PCM) and the Consolidated Framework for Implementation Research (CFIR) are determinant frameworks that guided data collection to identify barriers and facilitators of implementation. The PCM includes four elements (i.e., internal motivators, external motivators, resources, and opportunities for change) and depicts how these multi-level elements can impact intervention implementation in healthcare settings over time. The CFIR is a menu of individual-, program-, and organizational-level constructs consolidated from 19 theories and models related to intervention implementation. The constructs are organized into five overarching domains: intervention characteristics, outer setting, inner setting, characteristics of individuals, and process. These frameworks are highly complementary. PCM grounded our focus on drivers of practice operations and potential interactions, while CFIR helped us attend to relationships between our intervention, actors in the practice, and the structure and sequence of care delivery to maximize learning from our real-world setting.
Figure depicts the determinants, the implementation strategies addressing the determinants, and the observed mechanisms influencing each implementation outcome, and Table provides illustrative quotes for determinants.

Appropriateness and acceptability

The Project CONNECT intervention was appropriate for the safety-net healthcare setting and acceptable to system leaders. Three determinants (patient and clinician barriers, scientific evidence supporting the interventions, and engaged stakeholders) influenced appropriateness and acceptability, primarily through three implementation strategies (creating new clinical workflows, changing records systems, identifying champions) during early implementation.

Patient and clinician barriers

The intervention addressed patient and clinician barriers to coordinating care between primary care and oncology. Patients with cancer and chronic conditions described confusion about which clinician to contact (i.e., primary care, oncology, or other specialists) for appointments, medication refills, questions, or concerns. Patients also assumed their providers were actively communicating through the shared EMR system about, for example, changes to medications or their treatment progress. Although clinicians documented these issues in the EMR, they were not proactively communicating across teams unless specific actions were warranted. Clinicians also expressed challenges caring for patients with cancer and chronic conditions. Primary care clinicians noted patients often did not continue seeing their primary care providers after cancer diagnoses. They then assumed oncologists or other specialists were addressing patients’ chronic conditions. Oncology clinicians, on the other hand, felt patients could not access primary care appointments in a timely manner during active treatment, and they would therefore fill prescriptions for chronic conditions, sometimes for up to 1 year.
Changing the EMR system and creating new clinical workflows as implementation strategies facilitated development of the patient registry, which enabled the nurse coordinator to identify patients with cancer and chronic conditions, to track them through their course of oncology treatment, and to proactively schedule appointments, thus addressing both patient and provider needs.

Strength and quality of intervention evidence

Oncology and primary care system leaders agreed the selected interventions were appropriate for their setting to address these patient needs, and leaders supported implementation based on the strength of the scientific evidence that registry and nurse coordinator interventions enhance care coordination. These early champions recognized the interventions were familiar to clinical team members, had been deployed in other contexts across Parkland Health, and were reasonable to budget with the assistance of research funds. Identifying and engaging the champions further increased intervention acceptability.

Stakeholder engagement

The CONNECT intervention was designed with input from Parkland primary care and oncology leaders who were aware of patient and clinician barriers to caring for patients with cancer and chronic conditions. Primary care and oncology system leaders informed the decision to locate the nurse coordinator in the oncology clinic rather than in primary care practices, arguing that patients with chronic conditions in “active” cancer treatment needed continuity with primary care for their chronic disease management needs. System leaders emphasized the need for care coordination from the time of cancer diagnosis, not just after the end of active treatment. Over the course of the intervention, leaders proactively identified two nurse coordinators to fill the role.
Thus, engaging system leaders in designing and implementing key intervention components facilitated a sense of ownership and influenced intervention appropriateness and acceptability.

Adoption and penetration

All physicians and advanced practice providers of the Parkland oncology clinic adopted the intervention initially. Initial adoption was facilitated by active stakeholder engagement and leadership support. System leaders facilitated warm handoffs (i.e., provider-to-provider exchanges) between primary care and oncology clinicians, who served as local site champions to facilitate adoption (hereafter, Parkland implementers). Primary care implementers identified gaps in chronic disease monitoring and management for patients in active cancer treatment. They shared clinical expertise to change the EMR system, integrating the patient registry functionality into the EPIC Reporting Workbench. Primary care and oncology care teams both actively engaged with the research team in developing new clinical workflows establishing the nurse coordinator’s role in coordinating care. In turn, the research team was able to provide technical assistance related to the evidence base to inform implementer efforts. As the intervention progressed over time, there were barriers to continued adoption, which then also limited penetration. Two determinants (organizational communication networks and characteristics, and intervention characteristics) limited continued adoption and penetration of intervention components during the mid-implementation phase of the study. Reflecting on and evaluating the implementation process, together with the flexibility embedded in implementation, allowed the research team to be responsive to stakeholders, to identify new implementation strategies (e.g., reminding providers and staff), and ultimately to facilitate continued use and penetration of the intervention over the course of implementation.
Organizational communication networks and characteristics

Organizational communication networks and characteristics impacted continued adoption. Communication about day-to-day activities related to the intervention occurred informally during staff huddles and often via word of mouth, limiting dissemination of the intervention across the oncology clinic where the nurse coordinator was embedded. For example, the informal nature of team communication about the intervention limited the nurse coordinator’s ability to share information about the different ways she could support care coordination for cancer patients with chronic conditions. In addition, the safety-net system, a teaching hospital, also experienced frequent turnover in oncology fellows and staff. Without consistent communication and interaction with the nurse coordinator, many did not know the intervention was available to their patients. Compounding one another, these determinants led to limited staff and provider knowledge about the interventions and limited continued utilization of the intervention by oncology providers.

Intervention characteristics

Characteristics of the nurse coordinator role also influenced continued adoption. At study rollout, a primary care registered nurse filled the role. Because the coordinator intervention was located in oncology, it took time for the primary care nurse to learn the communication networks and to integrate within the oncology team. One year into the study, the nurse left Parkland, and system leaders identified a seasoned oncology nurse to assume the nurse coordinator role. Her familiarity with oncology team members and clinic processes increased others’ adoption of the nurse coordinator intervention. Barriers to continued adoption, coupled with intervention scope, further limited intervention penetration. Penetration, in this context, refers to the extent of the intervention’s spread or reach across all oncology clinicians.
Advanced practice providers (APPs), licensed nonphysician providers such as nurse practitioners, physician assistants, and medical assistants, expressed difficulty in changing their workflow to engage the nurse coordinator only for breast and colorectal cancer patients (as defined by the research study) when they also experienced challenges connecting patients with other cancer types with their primary care doctors. Although the nurse coordinator was employed by Parkland, the APPs viewed her as only available for the Project CONNECT “research” study, and thus as focusing only on patients with breast and colorectal cancer. This narrowly defined scope limited the integration of nurse coordinator services into usual clinic workflows and therefore limited intervention penetration. In addition, the nurse coordinator found that not all patients with cancer presented through the medical oncology clinic. In particular, some patients with stage 1 colorectal cancer who started in surgical oncology did not receive follow-up through the medical oncology clinic after surgery, so limiting the intervention’s scope to the medical oncology clinic limited penetration.

Reflecting and evaluating over time

Iterative data collection throughout implementation allowed for reflection and evaluation of the implementation process and allowed the research team and Parkland implementers to respond to stakeholder feedback and implementation barriers. Along with flexibility in implementation, this feedback loop enabled the team to address changes in determinants and adapt or identify new implementation strategies needed to increase adoption and penetration. Based on feedback, the nurse coordinator began attending meetings and huddles to inform new colleagues about her role and to continually remind existing staff and providers of it.
In addition, the research team expanded the intervention’s scope to include patients presenting at multiple sites (e.g., surgical oncology, emergency department) to increase penetration. Finally, changes to the workflow to include all patients with cancer and chronic conditions enabled the nurse coordinator to connect any patient needing chronic disease management to a primary care clinician.

Flexibility in implementation

The implementation strategy “flexibility in implementation” influenced all four implementation outcomes. Specifically, changes in intervention scope to include patients with any type of cancer increased APP acceptance of the nurse coordinator and therefore their adoption of the intervention. The change in scope allowed the nurse coordinator to further integrate with, or penetrate, the oncology team and expand intervention reach. In addition, continual stakeholder engagement and flexibility ensured intervention components remained appropriate and acceptable throughout implementation.
The Project CONNECT intervention was appropriate for the safety-net healthcare setting and acceptable to system leaders. Three determinants (patient and clinician barriers, scientific evidence supporting the interventions, and engaged stakeholders) influenced appropriateness and acceptability primarily through three implementation strategies (creating new clinical workflows, changing records systems, and identifying champions) during early implementation.

Patient and clinician barriers

The intervention addressed patient and clinician barriers to coordinating care between primary care and oncology. Patients with cancer and chronic conditions described confusion about which clinician to contact (i.e., primary care, oncology, or other specialists) for appointments, medication refills, questions, or concerns. Patients also assumed their providers were actively communicating through the shared EMR system about, for example, changes to medications or their treatment progress. Although clinicians documented these issues in the EMR, they were not proactively communicating across teams unless specific actions were warranted. Clinicians also expressed challenges caring for patients with cancer and chronic conditions. Primary care clinicians noted patients often did not continue seeing their primary care providers after cancer diagnoses. They then assumed oncologists or other specialists were addressing patients’ chronic conditions. Oncology clinicians, on the other hand, felt patients could not access primary care appointments in a timely manner during active treatment, and they would therefore fill prescriptions for chronic conditions, sometimes for up to 1 year. Changing the EMR system and creating new clinical workflows as implementation strategies facilitated development of the patient registry, which enabled the nurse coordinator to identify patients with cancer and chronic conditions, to track them through their course of oncology treatment, and to proactively schedule appointments, thus addressing both patient and provider needs.

Strength and quality of intervention evidence

Oncology and primary care system leaders agreed the selected interventions were appropriate for their setting to address these patient needs, and leaders supported implementation based on the strength of the scientific evidence behind registry and nurse coordinator interventions in enhancing care coordination. These early champions recognized the interventions were familiar to clinical team members, had been deployed in other contexts across Parkland Health, and were reasonable to budget with the assistance of research funds. Identifying and engaging the champions further increased intervention acceptability.

Stakeholder engagement

The CONNECT intervention was designed with input from Parkland primary care and oncology leaders who were aware of patient and clinician barriers to caring for patients with cancer and chronic conditions. Primary care and oncology system leaders informed the decision to locate the nurse coordinator in the oncology clinic rather than in primary care practices, arguing that patients with chronic conditions in “active” cancer treatment needed continuity with primary care for their chronic disease management needs. System leaders emphasized the need for care coordination from the time of cancer diagnosis, not just after the end of active treatment. Over the course of the intervention, leaders proactively identified two nurse coordinators to fill the role. Thus, engaging system leaders in designing and implementing key intervention components facilitated a sense of ownership and influenced intervention appropriateness and acceptability.
All physicians and advanced practice providers of the Parkland oncology clinic adopted the intervention initially. Initial adoption was facilitated by active stakeholder engagement and leadership support. System leaders facilitated warm handoffs (i.e., provider-to-provider exchanges) between primary care and oncology clinicians, who served as local site champions to facilitate adoption (aka Parkland implementers). Primary care implementers identified gaps in chronic disease monitoring and management for patients in active cancer treatment. They shared clinical expertise to change the EMR system, integrating the patient registry functionality into the Epic Reporting Workbench. Primary care and oncology care teams both actively engaged with the research team in developing new clinical workflows establishing the nurse coordinator’s role in coordinating care. In turn, the research team was able to provide technical assistance related to the evidence base to inform implementer efforts.

As the intervention progressed over time, there were barriers to continued adoption, which then also limited penetration. Two determinants (organizational communication networks and characteristics, and intervention characteristics) limited continued adoption and penetration of intervention components during the mid-implementation phase of the study. Reflecting on and evaluating the implementation process, together with the flexibility embedded in implementation, allowed the research team to be responsive to stakeholders, to identify new implementation strategies (e.g., reminding providers and staff), and ultimately to facilitate continued use of the intervention and penetration over the course of implementation.

Organizational communication networks and characteristics

Organizational communication networks and characteristics impacted continued adoption. Communication about day-to-day activities related to the intervention occurred informally during staff huddles and often via word of mouth, limiting dissemination of the intervention across the oncology clinic where the nurse coordinator was embedded. For example, the informal nature of team communication about the intervention limited the nurse coordinator’s ability to share information about the different ways she could support care coordination for cancer patients with chronic conditions. In addition, the safety-net system, a teaching hospital, experienced frequent turnover in oncology fellows and staff. Without consistent communication and interaction with the nurse coordinator, many did not know the intervention was available to their patients. Compounding one another, these determinants led to limited staff and provider knowledge about the interventions and limited continued utilization of the intervention by oncology providers.

Intervention characteristics

Characteristics of the nurse coordinator also influenced continued adoption. At study rollout, a primary care registered nurse filled the role. Because the coordinator intervention was located in oncology, it took time for the primary care nurse to learn the communication networks and to integrate within the oncology team. One year into the study, the nurse left Parkland, and system leaders identified a seasoned oncology nurse to assume the nurse coordinator role. Her familiarity with oncology team members and clinic processes increased others’ adoption of the nurse coordinator intervention.

Barriers to continued adoption, coupled with intervention scope, further limited intervention penetration. Penetration, in this context, refers to the extent of the intervention’s spread or reach across all oncology clinicians. Advanced practice providers (APPs) expressed difficulty in changing their workflow to engage the nurse coordinator only for breast and colorectal cancer patients (as defined by the research study) when they also experienced challenges connecting patients with other cancer types with their primary care doctors. APPs include licensed nonphysician providers such as nurse practitioners, physician assistants, and medical assistants. Although the nurse coordinator was employed by Parkland, the APPs viewed her as available only for the Project CONNECT “research” study, thus focusing only on patients with breast and colorectal cancer. This narrowly defined scope limited the integration of nurse coordinator services into usual clinic workflows and therefore limited intervention penetration. In addition, the nurse coordinator found that not all patients with cancer presented through the medical oncology clinic. In particular, some patients with stage 1 colorectal cancer who started in surgical oncology did not receive follow-up through the medical oncology clinic after surgery, so limiting intervention scope to the medical oncology clinic limited penetration.

Reflecting and evaluating over time

Iterative data collection throughout implementation allowed for reflection on and evaluation of the implementation process and allowed the research team and Parkland implementers to respond to stakeholder feedback and implementation barriers. Along with flexibility in implementation, this feedback loop enabled the team to address changes in determinants and to adapt or identify new implementation strategies needed to increase adoption and penetration. Based on feedback, the nurse coordinator began attending meetings and huddles to inform new colleagues about her role and to continually remind existing staff and providers about her role. In addition, the research team expanded the intervention’s scope to include patients presenting at multiple sites (e.g., surgical oncology, emergency department) to increase penetration. Finally, changes to the workflow to include all patients with cancer and chronic conditions enabled the nurse coordinator to connect any patient needing chronic disease management to a primary care clinician.
Flexibility in implementation

The implementation strategy “flexibility in implementation” influenced all four implementation outcomes. Specifically, changes in intervention scope to include patients with any type of cancer increased APP acceptance of the nurse coordinator and therefore their adoption of the intervention. The change in scope allowed the nurse coordinator to further integrate with—or penetrate—the oncology team and expand intervention reach. In addition, continual stakeholder engagement and flexibility ensured intervention components remained appropriate and acceptable throughout implementation.
There is significant interest among researchers, clinicians, health system leaders, and policy makers in identifying optimal ways to coordinate care for cancer survivors, especially those who are under- and uninsured and most likely to have poor health outcomes. This study demonstrated that implementing a system-level evidence-based intervention to coordinate care for cancer survivors with chronic conditions between oncology and primary care in a safety-net health system was appropriate and acceptable to patients and health system stakeholders. While clinicians and clinical staff initially adopted the intervention, continued adoption and penetration of the intervention throughout the clinic were challenging even with support from motivated and engaged health system leaders. This is because the intervention, as designed, experienced challenges integrating into real-world practice. Continual evaluation and reflection allowed the research team to be responsive to stakeholder feedback in real time, to identify emerging determinants, and to develop new implementation strategies to increase acceptability, continued adoption, and penetration. Importantly, flexibility in implementation became a key implementation strategy to address barriers to adoption and penetration over time. While our intervention took place in a safety-net healthcare system in the USA, our findings about how to bridge cancer survivors’ care between primary care clinicians and oncology clinicians are applicable more broadly, as comprehensive cancer survivorship care approaches are needed globally across different healthcare system models.

Determinants—such as patient and clinician barriers, lack of stakeholder engagement, the strength and quality of scientific evidence supporting care coordination interventions, and intervention characteristics—have been explored in the context of other interventions and other disease/patient targets, such as diabetes and hypertension management. However, our study is the first to examine these determinants in the context of implementing an intervention for patients with cancer receiving care in a safety-net setting. This is significant because an increasing number of patients with cancers such as early-stage breast, colon, and rectal cancer are living decades after their initial diagnosis, thanks to significant advances in early detection and treatments. In such cases, cancer becomes another chronic condition that patients and their clinicians must manage, including timely surveillance for recurrence and managing risks associated with cancer treatments and their sequelae. Thus, delineating determinants of adoption and implementation of evidence-based care coordination interventions shown to be effective for routine chronic conditions, such as those used in Project CONNECT, can aid health systems in coordinating care for patients with cancer and chronic conditions. Importantly, our focus on safety-net health systems has the potential to increase health equity by identifying determinants relevant for patients with significant social and economic challenges and for under-resourced systems. More cancer survivorship care delivery research embedded in safety-net systems and community health centers is needed to improve care delivery outcomes.

This study shows that determinants are dynamic rather than static constructs and change over time to influence multiple implementation outcomes. While researchers have theorized that determinants may change over time, few studies have embedded longitudinal evaluations such as ours that demonstrate how they change and the ways in which they influence implementation outcomes over time. For example, system leaders affirmed intervention appropriateness and contributed to initial acceptability at multiple levels.
As implementation proceeded, it became clear that consistent, iterative engagement with site champions in primary and oncology care was also necessary to ensure continued adoption. Our study design enabled us to observe these changes and influences over time, and our iterative immersion/crystallization data analysis strategy enabled us to identify similar recursive relations between determinants, strategies, mechanisms, and outcomes. Similarly, this study shows how implementation outcomes are interrelated and influenced by determinants and other outcomes, and how it may be unclear at what point one outcome ends and another begins. Although adoption has often been viewed as the intention or initial decision to use an innovation, we consider adoption as both the initial uptake and continued use of the interventions. For example, while stakeholders initially adopted the nurse coordinator intervention, continued adoption and interaction with the nurse coordinator later waned among APPs, who felt the intervention was only relevant for some of their patients. This limited overall penetration into the system. Flexibility in implementation meant that we could rapidly evaluate and adapt in real time to facilitate continued use of the intervention. Recognizing the need for adaptations and responding to dynamic contexts are recommended strategies when designing for dissemination and sustainability. We hypothesize that the phenomena we captured in variable adoption and penetration may be early determinants of maintenance or institutionalization of an intervention into practice. In fact, in our recent meeting with the director of Parkland Global Oncology, we learned that Project CONNECT interventions are still being used at Parkland in a modified form. Describing these interdependent relations is critically important for keeping the field moving forward and for research within healthcare systems, which are complex systems adaptive to internal and external factors. Thus, mechanisms by which strategies address determinants to improve implementation and service outcomes are more likely to be interdependent than linear. Our study’s design and analytic methods helped bring this reality to light.

Our data analysis strategy and use of the IRLM were fundamental to identifying, defining, and often disentangling determinants and strategies, understanding their mechanisms, and linking them to implementation outcomes. Smith et al.’s IRLM has been used to guide the design and evaluation of implementation studies, describe implementation barriers and facilitators, list hypothesized mechanisms, and engage stakeholders. Our analysis advanced the authors’ recommendation to use the tool to elucidate the evolving relations between determinants, implementation strategies, mechanisms, and implementation outcomes. Few studies collect the data needed to describe these changes over time. Our analysis exemplifies why continuous process evaluation data are needed longitudinally and why investing in mixed-method, comprehensive, and longitudinal evaluation data is crucial for rigorous implementation research.

Our study also sheds light on the balance between the degree to which an intervention is delivered as intended (i.e., implementation fidelity) and the flexibility needed to integrate the intervention into real-world settings. We argue that flexibility of implementation is necessary to accelerate translation of evidence-based interventions into real-world settings, and it does not necessarily constitute a decrease in fidelity to the evidence-based intervention. Both fidelity and flexibility are needed and can co-occur in equilibrium, such that key functions of evidence-based interventions are implemented with fidelity while the forms of the interventions themselves may differ across settings, or changes may be made to intervention forms in response to contextual barriers.
As shown in our study, implementation flexibility enhanced adoption and penetration of the intervention. This may be a critical nuance that bears further scrutiny and may be a key ingredient in increasing uptake of evidence-based interventions in real-world settings.

Study limitations

The onset of COVID in North Texas disrupted elective care across the Parkland system; oncology clinic teams pivoted to telephone appointments, and the nurse coordinator was able to continue her work remotely. Although direct research observation was temporarily interrupted, our relationships with key stakeholders enabled the research team to continue to collect data through email and telephone exchanges. In addition, a challenge we did not anticipate, but did document, was that annual updates to the Epic EMR could also “break” links even to existing Epic functionality, such as the Reporting Workbench, and needed to be monitored to ensure registry tools remained active.

Future directions

While our analysis here reports key determinants affecting implementation outcomes, it is not yet clear how increased adoption and penetration may influence team processes supporting care coordination that we did not observe. Although the evidence base for care coordination interventions is strong, the field’s understanding of how factors relevant to local settings shape implementation is still emerging. Forthcoming analyses of Project CONNECT intervention outcomes at the patient and system levels could facilitate examination of maintenance and generate key questions to explore about earlier indicators of post-study intervention sustainability once the trial ended. Having described implementation outcomes here, subsequent analyses of clinical and patient-reported outcomes will help advance our understanding of how these EBIs may help optimize care for these vulnerable patient populations. Similarly, we did not explicitly set out to assess the effectiveness of a bundled implementation strategy, nor to test the separability of our multicomponent intervention. While future work could mount studies to examine these issues, from the perspective of addressing disparities in survivorship care delivery, implementation research should focus on better characterizing the interface between primary care and oncology and on identifying strategies to better integrate care delivery for cancer survivors such that the care they receive is seamless and addresses survivorship care guidelines holistically.
Effective and accepted interventions, such as using a population-based registry to track patients with cancer and chronic conditions and assigning a care coordinator to enhance primary care access, can be implemented successfully in safety-net health systems. Adoption and penetration across the system can be further enhanced by allowing flexibility in how health systems choose to implement these interventions. Doing so with active and continual engagement of patient and health system partners presents the most promising approach to quickly translating effective interventions into real-world practice to improve care delivery and health outcomes for cancer survivors.
Additional file 1: Table 1. Immersion/crystallization cycles of data analysis. Table 2. Codes and themes for immersion/crystallization cycles of data analysis.
CM-Path Molecular Diagnostics Forum—consensus statement on the development and implementation of molecular diagnostic tests in the United Kingdom | 2a3bc382-0b17-4a24-907a-6e35bc398aa2 | 6889373 | Pathology[mh] | Pathology—the study of disease—has evolved significantly since its beginnings with Virchow and a purely morphological description of cellular alterations, to our current ability to make fine-resolution observations at the subcellular/molecular scale. We can now use this knowledge and modern molecular biological techniques to interrogate human tissue samples in increasingly sophisticated ways, with the ultimate aim of providing more accurate diagnoses that can better guide treatment choices. In the field of cellular pathology, it is now possible to supplement traditional light microscopic assessment of tissue samples with a vast array of information at genomic, epigenomic, transcriptomic, proteomic and metabolomic levels. Thus, molecular diagnostics is now the cornerstone of precision/personalised medicine, in which individual patients receive customised healthcare on the basis of their specific test results, and has the potential to revolutionise patient care and improve outcomes, as exemplified by its use in haematological malignancies. The application of molecular diagnostics is currently being expanded into other clinical areas; for example, in the United Kingdom (UK), the 100,000 Genomes Project has brought whole-genome sequencing into routine clinical practice by initially applying this technique to cancer and rare diseases. Despite the promises of molecular diagnostics, significant barriers have impeded its widespread clinical adoption. Until recently, there has been a lack of national strategy for molecular diagnostic testing with complex commissioning and funding arrangements. Moreover, the National Health Service (NHS) is currently poorly equipped to embrace fully this healthcare revolution. 
In particular, the substantial attrition of academic pathology in the UK over the past two decades, coupled with the increasing service demands placed on pathologists, means that many diagnostic laboratories lack the knowledge, expertise and capacity to introduce these new tests efficiently. In addition, the interaction between clinicians, academia, industry and regulators required to expedite the development of new molecular diagnostic tests and their introduction into clinical practice has not been uniformly present to date.

Inception of a cross-sector molecular diagnostics forum

In 2016, the National Cancer Research Institute (NCRI) launched its Cellular Molecular Pathology (CM-Path) initiative with the aims of supporting modernisation of pathology in the UK and, in so doing, helping to develop the workforce and infrastructure required to provide nationwide molecular diagnostic services (https://cmpath.ncri.org.uk). To advance pathology in the UK, and thus ensure that patients receive the highest quality of care possible, CM-Path recognises the value of collaborating with industry, regulators and other key stakeholders. To this end, members of CM-Path workstream 4 (‘Technology and Informatics’) convened the first meeting of the CM-Path Molecular Diagnostics Forum on 26th January 2018 at the Royal Society of Medicine in London.
The overarching aims of the forum are as follows:
To define infrastructure, regulatory and workflow requirements for the adoption of molecular diagnostics in NHS pathology laboratories;
To develop protocols to ensure faster and more efficient implementation of emerging technologies and novel bespoke and validated molecular panels;
To assist in the education/training of the workforce required to provide high-quality, nationwide molecular diagnostic services;
To actively engage pathologists with industry and regulators to develop the next phase of molecular diagnostic tests;
To form links with companies developing software to assist in test interpretation and correlation between molecular findings and clinical outcomes.

Ultimately, we wish to ensure that all patients across the UK have equitable and rapid access to effective molecular diagnostic tests, whether developed by industry or academia. The objectives of this particular meeting, which was attended by 25 individuals including clinicians, academics and representatives from industry and regulatory bodies, were to define a ‘roadmap’ for molecular diagnostic test development and NHS implementation and to identify the challenges (and their possible solutions) that are likely to be encountered during these processes. The meeting commenced with invited case presentations on the development and implementation of new molecular diagnostic tests in rare ophthalmic disease (Professor Graeme Black, University of Manchester) and bladder cancer (Dr Andrew Feber, University College London), providing illuminating ‘real world’ insights into these processes. Summaries of the perspectives of industry and of the National Institute for Health and Care Excellence (NICE) on the current state of affairs were also presented by Jane Coppard (public affairs manager at Roche) and Rebecca Albrow (senior technical adviser in the NICE Diagnostics Assessment Programme), respectively.
It was highlighted that NICE diagnostics guidance recommendations are typically made by the Diagnostic Advisory Committee (DAC), an independent decision-making body that bases its recommendations on review of clinical and economic evidence. Once recommendations are made, NICE diagnostics guidance is published on the NICE website and is disseminated to all stakeholders, which include professional societies, patient organisations and individual clinicians. NICE also creates tools to support the adoption of guidance but there are many factors that can hinder nationwide uptake. Until recently, there has been no systematic method of tracking the use of diagnostics within the NHS, and therefore, the impact of NICE recommendations cannot be directly evaluated.

Developing a roadmap for the development and implementation of new molecular diagnostic tests

In a subsequent breakout session, delegates were grouped by professional background and tasked to create a roadmap describing the stages in the development of a new molecular diagnostic test, from initial concept to clinical implementation. This is particularly important as, compared with therapeutics, the validation and approval processes for diagnostic tests are poorly defined. It quickly became clear that no single group was able to map the entire pathway, immediately justifying the value of arranging this multidisciplinary meeting. Ultimately, a final roadmap was agreed by consensus between the groups (Fig. ); access to carefully curated tissue specimens through biobanks, health economics and workforce education are key aspects that have central relevance to the entire process. The discussions were very much centred on test development in the UK, although many companies developing such products are multinational or would aim to market them internationally.
Although not the focus of the workshop, it was also acknowledged that new diagnostic tests are often introduced alongside new therapies (as ‘companion diagnostics’), so the development of novel molecular diagnostic tests often occurs in parallel to drug development. In this instance, the clinical need would be very clear and specific at the outset but otherwise the overall roadmap would still be similar.

Challenges to the implementation of new molecular diagnostic tests

The groups were then mixed and asked to identify challenges that are likely to be encountered within the roadmap. Several key themes emerged during this discussion; importantly, a number of innovative solutions were also suggested (Table ). A follow-up meeting was held in October 2018 to discuss these challenges in greater detail, and to consider how our roadmap will likely be impacted by the reconfiguration of genomic laboratory services within NHS England that took place that month. By creating a single national testing network co-ordinated through seven Genomic Laboratory Hubs (GLHs), this reconfiguration aims to expedite widespread adoption of molecular diagnostics into routine clinical practice and to ensure that such tests are conducted to uniform standards, thus providing consistent and equitable care across the country. Building upon the success of the 100,000 Genomes Project, this project forms part of the Government’s Life Sciences Strategy, and aims to develop a world-leading Genomic Medicine Service within the NHS, as well as to support scientific research and innovation more broadly. The new service now includes a National Genomic Test Directory for both cancer and rare and inherited diseases. This directory specifies which tests are available within the NHS and how they are funded, which patients are eligible to receive these tests and which technology platforms should be used to perform each test.
The directory will be updated annually, based on recommendations from a Clinical and Scientific Expert Panel that will evaluate new genomic tests and determine which existing tests should be retired or replaced. The authors believe that this positive development will help with many of the challenges that we have identified but, crucially, it only currently covers genetic testing and not other forms of molecular diagnostics (e.g. infectious disease). Whilst this new system should help to deliver more uniform nationwide access to molecular diagnostic tests, some scope for local flexibility in testing strategy is likely to be of benefit to patient care. A crucial issue to consider when ordering a molecular diagnostic test is how this test is best integrated into each patient’s individual care pathway and we envisage that local multidisciplinary team (MDT) meetings will continue to play an important role in making such decisions. Some test results are needed more urgently than others and this can influence the type of test selected and whether this is performed locally or sent externally. For example, one-step nucleic acid amplification (OSNA) testing to detect cytokeratin 19 (CK19) mRNA copy numbers in homogenised axillary lymph node samples, as a marker of breast cancer sentinel lymph node metastasis, has been performed in some UK centres for many years, with rapid intraoperative results determining the requirement for nodal clearance as part of a one-step procedure. Likewise, lung cancer mutation status can have a significant impact upon immediate clinical management and rapid in-house testing can be very useful, particularly in the context of acutely unwell patients or where a prompt initial screening test result can avoid the need to perform further unnecessary tests (e.g. KRAS mutations are generally mutually exclusive with EGFR and ALK mutations in lung cancer, which therefore do not need to be tested for when a KRAS mutation is detected).
Initially, MDTs may also wish to arrange local funding for specific tests, rather than incur the time penalty involved in sending samples away. Nevertheless, the majority of molecular diagnostic tests are not urgent (e.g. screening for Lynch syndrome in colorectal cancer) and are therefore likely to be best performed in a centralised reference laboratory. Furthermore, over time, we hope that the GLHs will generate evidence to demonstrate that centralised testing can return results in a clinically relevant timeframe for most indications. Another reason to retain local testing might be when a centre has already developed expertise in the performance and interpretation of a specific test, which could not be delivered to the same standard through an associated GLH. It was felt by forum participants that GLHs could play an important role in the development of novel molecular tests by providing access to high-quality human tissue samples via linked academic biobanks and by assisting in test validation, particularly by facilitating rigorous comparison with established tests and by recruiting patients into clinical trials. Once an evidence base has been established, a key milestone for any new molecular test will be inclusion in the test directory and it is envisaged that this step could be aligned with approval by the NICE DAC. GLHs will also have responsibility for implementing newly approved tests, ideally working in collaboration with each other to ensure optimal quality control, and in monitoring test uptake and downstream clinical effects, for example by transmitting relevant information derived from genomic MDT meetings to a centralised repository of outcome data. Likely future challenges for the GLHs include extending molecular tests to include other ‘omics’ approaches (e.g. epigenomics, transcriptomics, proteomics and metabolomics) whilst at the same time ensuring standardised, high-quality performance of established techniques (e.g.
PD-L1 immunohistochemistry in non-small-cell lung cancer, for which several different assays are available). This may also entail the incorporation of digital pathology, which is currently being promoted via an Innovate UK initiative with the establishment of five centres of excellence for digital pathology, image analysis and artificial intelligence. Such approaches are likely to become part of integrated reporting, bringing together the clinical, morphological, immunohistochemical and molecular data, in order to improve diagnostics and patient management. Centralised testing offers many benefits but there are also potential downsides to such an approach, and lessons should be learnt from previous reconfigurations of pathology services. Whilst earlier consolidations have produced cost savings, a large initial financial investment is often required, for example to cover the cost of new transport networks and to develop the information technology (IT) infrastructure required to connect different hospitals/laboratories. Critically, the NHS workforce remains central to the provision of high-quality diagnostic testing and there is a risk of loss of valuable expertise amongst staff who are not based in GLHs. Furthermore, sending tissue samples away for testing may negatively impact upon the ability of ‘non-hub’ centres to contribute to biobanking activities that are critical to support biomedical research. Given these risks, and to foster a new molecular medicine culture within the NHS, it is imperative that the seven GLHs (and their associated ‘spoke’ hospitals) adopt a collaborative, rather than competitive, approach to service delivery. Importantly, shared leadership by pathology, genetic and clinical teams will be needed to deliver a truly integrated service.
Nationwide delivery of a ‘cutting-edge’ molecular diagnostic service will require large-scale upskilling of the current laboratory workforce, as well as amendments to the training of medical students, junior doctors and clinical scientists. With this requirement in mind, CM-Path, in collaboration with other relevant organisations, is actively working to develop training opportunities in molecular pathology. Importantly, a requirement for formal molecular pathology teaching is now included in the Royal College of Pathologists (RCPath) ‘Curriculum for Specialty Training in Histopathology’; a 2-week molecular pathology attachment for histopathology trainees is now advocated and trainee knowledge of this area will be evaluated both through workplace-based assessment and formal professional examinations. The curriculum is currently undergoing further revision and it is envisaged that molecular pathology will feature even more prominently in the next iteration. In parallel, Health Education England (HEE), in partnership with several leading UK universities, provides formal postgraduate qualifications in genomic medicine as part of its Genomics Education Programme, as well as numerous other online-learning resources ( https://www.genomicseducation.hee.nhs.uk ). In addition, a range of professional training courses in molecular pathology are also available: ‘Molecular Pathology and Diagnosis of Cancer’ delivered by the Wellcome Genome Campus and RCPath, ‘UK Molecular Diagnostics Training School’ delivered by the Nottingham Molecular Pathology Node, ‘Molecular Pathology Study Day’ organised by the British Division of the International Academy of Pathology (BDIAP) and ‘Getting to Grips with Genomics’, which is a joint initiative between CM-Path, RCPath and HEE and, importantly, provides education in molecular pathology to both trainees and trainers alike.
Finally, legal, accreditation and regulatory frameworks must be considered when selecting or developing new molecular diagnostic tests. New in vitro diagnostic devices (IVDs) must be approved before clinical adoption; regulatory guidelines for such approval exist both within the UK and the European Union (EU). In the UK, the Medicines and Healthcare products Regulatory Agency (MHRA) is responsible for ensuring that medical devices are safe for clinical use. Currently, there is a Europe-wide transition to the new EU Regulation on In Vitro Diagnostic Medical Devices 2017/746. This regulation sets out a new pathway for certification that will be carried out by approved notified bodies, and Conformité Européenne In Vitro Diagnostic (CE IVD) approval is a sign of conformity with European standards. Whilst still to be confirmed, it is likely that these changes will apply in the UK even after its withdrawal from the EU. In the UK, all molecular assays and laboratory processes must also be accredited by the United Kingdom Accreditation Service (UKAS) through meeting a range of different International Organization for Standardization (ISO) requirements. UKAS also requires that IVDs undergo external quality assessment (EQA), with such quality control exercises most commonly conducted by the United Kingdom National External Quality Assessment Service (UK NEQAS). In the United States of America, IVDs are classified based on likely patient risk and are usually required to undergo premarket approval (PMA), unless there is a specific exemption. Through the Molecular Diagnostics Forum, for example, CM-Path is working closely with the MHRA and the British In Vitro Diagnostic Association (BIVDA) in order to ensure that regulators are involved at an early stage in the development of new diagnostic tests.
Conclusions and future perspectives

Our NCRI CM-Path Molecular Diagnostics Forum meetings proved to be highly constructive in identifying strengths and weaknesses in the application of molecular pathology across the NHS, and the group is committed to facilitating continued collaboration between pathology (in both the NHS and academia), industry and regulators. To our knowledge, this is the first cross-sector attempt at defining the roadmap for molecular diagnostic tests, from conception through to deployment and use in accredited laboratories within the NHS. Whilst this process is currently complex, we believe that many of the challenges that we have identified can be overcome through closer collaboration between key stakeholders and with the network of GLHs. The next forum meeting will have a specific emphasis on addressing optimal sample handling for molecular testing, how the new ‘hub and spoke’ arrangement of GLHs will impact upon the specimen journey from patient to laboratory and how molecular testing at GLHs can be potentially integrated with digital pathology being performed at the above-mentioned five new centres. Lessons learned will be integrated into the roadmap, further developing molecular diagnostic capabilities in the UK. CM-Path would be delighted to hear from any individual or group who feel that the Molecular Diagnostics Forum is relevant to their work and who would like to attend future meetings—please email [email protected] to get in touch.
In 2016, the National Cancer Research Institute (NCRI) launched its Cellular Molecular Pathology (CM-Path) initiative with the aim of supporting modernisation of pathology in the UK and, in so doing, to help to develop the workforce and infrastructure required to provide nationwide molecular diagnostic services ( https://cmpath.ncri.org.uk ). To advance pathology in the UK, and thus ensure that patients receive the highest quality of care possible, CM-Path recognises the value of collaborating with industry, regulators and other key stakeholders. To this end, members of CM-Path workstream 4 (‘Technology and Informatics’) convened the first meeting of the CM-Path Molecular Diagnostics Forum on 26th January 2018 at the Royal Society of Medicine in London. The overarching aims of the forum are as follows: To define infrastructure, regulatory and workflow requirements for the adoption of molecular diagnostics in NHS pathology laboratories; To develop protocols to ensure faster and more efficient implementation of emerging technologies and novel bespoke and validated molecular panels; To assist in the education/training of the workforce required to provide high-quality, nationwide molecular diagnostic services; To actively engage pathologists with industry and regulators to develop the next phase of molecular diagnostic tests; To form links with companies developing software to assist in test interpretation and correlation between molecular findings and clinical outcomes. Ultimately, we wish to ensure that all patients across the UK have equitable and rapid access to effective molecular diagnostic tests, whether developed by industry or academia. 
The objectives of this particular meeting, which was attended by 25 individuals including clinicians, academics and representatives from industry and regulatory bodies, were to define a ‘roadmap’ for molecular diagnostic test development and NHS implementation and to identify the challenges (and their possible solutions) that are likely to be encountered during these processes. The meeting commenced with invited case presentations on the development and implementation of new molecular diagnostic tests in rare ophthalmic disease (Professor Graeme Black, University of Manchester) and bladder cancer (Dr Andrew Feber, University College London), providing illuminating ‘real world’ insights into these processes. Summaries of the perspectives of industry and of the National Institute for Health and Care Excellence (NICE) on the current state of affairs were also presented by Jane Coppard (public affairs manager at Roche) and Rebecca Albrow (senior technical adviser in the NICE Diagnostics Assessment Programme), respectively. It was highlighted that NICE diagnostics guidance recommendations are typically made by the Diagnostic Advisory Committee (DAC), an independent decision-making body that bases its recommendations on review of clinical and economic evidence. Once recommendations are made, NICE diagnostics guidance is published on the NICE website and is disseminated to all stakeholders, which include professional societies, patient organisations and individual clinicians. NICE also creates tools to support the adoption of guidance but there are many factors that can hinder nationwide uptake. Until recently, there has been no systematic method of tracking the use of diagnostics within the NHS, and therefore, the impact of NICE recommendations cannot be directly evaluated.
In a subsequent breakout session, delegates were grouped by professional background and tasked to create a roadmap describing the stages in the development of a new molecular diagnostic test, from initial concept to clinical implementation. This is particularly important as, compared with therapeutics, the validation and approval processes for diagnostic tests are poorly defined. It quickly became clear that no single group was able to map the entire pathway, immediately justifying the value of arranging this multidisciplinary meeting. Ultimately, a final roadmap was agreed by consensus between the groups (Fig. ); access to carefully curated tissue specimens through biobanks, health economics and workforce education are key aspects that have central relevance to the entire process. The discussions were very much centred on test development in the UK, although many companies developing such products are multinational or would aim to market them internationally. Although not the focus of the workshop, it was also acknowledged that new diagnostic tests are often introduced alongside new therapies (as ‘companion diagnostics’), so the development of novel molecular diagnostic tests often occurs in parallel to drug development. In this instance, the clinical need would be very clear and specific at the outset but otherwise the overall roadmap would still be similar.
The groups were then mixed and asked to identify challenges that are likely to be encountered within the roadmap. Several key themes emerged during this discussion; importantly, a number of innovative solutions were also suggested (Table ). A follow-up meeting was held in October 2018 to discuss these challenges in greater detail, and to consider how our roadmap will likely be impacted by the reconfiguration of genomic laboratory services within NHS England that took place that month. By creating a single national testing network co-ordinated through seven Genomic Laboratory Hubs (GLHs), this reconfiguration aims to expedite widespread adoption of molecular diagnostics into routine clinical practice and to ensure that such tests are conducted to uniform standards, thus providing consistent and equitable care across the country. Building upon the success of the 100,000 Genomes Project, this project forms part of the Government’s Life Sciences Strategy, and aims to develop a world leading Genomic Medicine Service within the NHS, as well as to support scientific research and innovation more broadly. The new service now includes a National Genomic Test Directory for both cancer and rare and inherited diseases. This directory specifies which tests are available within the NHS and how they are funded, which patients are eligible to receive these tests and which technology platforms should be used to perform each test. The directory will be updated annually, based on recommendation from a Clinical and Scientific Expert Panel that will evaluate new genomic tests and determine which existing tests should be retired or replaced. The authors believe that this positive development will help with many of the challenges that we have identified but, crucially, it only currently covers genetic testing and not other forms of molecular diagnostics (e.g. infectious disease). 
Whilst this new system should help to deliver more uniform nationwide access to molecular diagnostic tests, some scope for local flexibility in testing strategy is likely to be of benefit to patient care. A crucial issue to consider when ordering a molecular diagnostic test is how this test is best integrated into each patient’s individual care pathway and we envisage that local multidisciplinary team (MDT) meetings will continue to play an important role in making such decisions. Some test results are needed more urgently than others and this can influence the type of test selected and whether this is performed locally or sent externally. For example, one-step nucleic acid amplification (OSNA) testing to detect cytokeratin 19 (CK19) mRNA copy numbers in homogenised axillary lymph node samples, as a marker of breast cancer sentinel lymph node metastasis, has been performed in some UK centres for many years, with rapid intraoperative results determining the requirement for nodal clearance as part of a one-step procedure. Likewise, lung cancer mutation status can have a significant impact upon immediate clinical management and rapid in-house testing can be very useful, particularly in the context of acutely unwell patients or where a prompt initial screening test result can avoid the need to perform further unnecessary tests (e.g. KRAS mutations are generally mutually exclusive with EGFR and ALK mutations in lung cancer, which therefore do not need to be tested for when a KRAS mutation is detected). Initially, MDTs may also wish to arrange local funding for specific tests, rather than incur the time penalty involved in sending samples away. Nevertheless, the majority of molecular diagnostic tests are generally not urgent (e.g screening for Lynch syndrome in colorectal cancer) and are therefore likely to be best performed in a centralised reference laboratory. 
Furthermore, over time, we hope that the GLHs will generate evidence to demonstrate that centralised testing can return results in a clinically relevant timeframe for most indications. Another reason to retain local testing might be when a centre has already developed expertise in the performance and interpretation of a specific test, which could not be delivered to the same standard through an associated GLH. It was felt by forum participants that GLHs could play an important role in the development of novel molecular tests by providing access to high-quality human tissue samples via linked academic biobanks and by assisting in test validation, particularly by facilitating rigorous comparison with established tests and by recruiting patients into clinical trials. Once an evidence base has been established, a key milestone for any new molecular test will be inclusion in the test directory and it is envisaged that this step could be aligned with approval by the NICE DAC. GLHs will also have responsibility for implementing newly approved tests, ideally working in collaboration with each other to ensure optimal quality control, and in monitoring test uptake and downstream clinical effects, for example by transmitting relevant information derived from genomic MDT meetings to a centralised repository of outcome data. Likely future challenges for the GLHs include extending molecular tests to include other ‘omics’ approaches (e.g., epigenomics, transcriptomics, proteomics and metabolomics) whilst at the same time ensuring standardised, high-quality performance of established techniques (e.g. PD-L1 immunohistochemistry in non-small-cell lung cancer, for which several different assays are available). This may also entail the incorporation of digital pathology, which is currently being promoted via an Innovate UK initiative with the establishment of five centres of excellence for digital pathology, image analysis and artificial intelligence. 
, Such approaches are likely to become part of integrated reporting, bringing together the clinical, morphological, immunohistochemical and molecular data, in order to improve diagnostics and patient management. Centralised testing offers many benefits but there are also potential downsides to such an approach, and lessons should be learnt from previous reconfigurations of pathology services. Whilst earlier consolidations have produced cost savings, a large initial financial investment is often required, for example to cover the cost of new transport networks and to develop the information technology (IT) infrastructure required to connect different hospitals/laboratories. Critically, the NHS workforce remains central to the provision of high-quality diagnostic testing and there is a risk of loss of valuable expertise amongst staff who are not based in GLHs. Furthermore, sending tissue samples away for testing may negatively impact upon the ability of ‘non-hub’ centres to contribute to biobanking activities that are critical to support biomedical research. Given these risks, and to foster a new molecular medicine culture within the NHS, it is imperative that the seven GLHs (and their associated ‘spoke’ hospitals) adopt a collaborative, rather than competitive, approach to service delivery. Importantly, shared leadership by pathology, genetic and clinical teams will be needed to deliver a truly integrated service. Nationwide delivery of a ‘cutting-edge’ molecular diagnostic service will require large-scale upskilling of the current laboratory workforce, as well as amendments to the training of medical students, junior doctors and clinical scientists. With this requirement in mind, CM-Path, in collaboration with other relevant organisations, is actively working to develop training opportunities in molecular pathology. 
, Importantly, a requirement for formal molecular pathology teaching is now included in the Royal College of Pathologists (RCPath) ‘Curriculum for Specialty Training in Histopathology’; a 2-week molecular pathology attachment for histopathology trainees is now advocated and trainee knowledge of this area will be evaluated both through workplace-based assessment and formal professional examinations. The curriculum is currently undergoing further revision and it is envisaged that molecular pathology will feature even more prominently in the next iteration. In parallel, Health Education England (HEE), in partnership with several leading UK universities, provides formal postgraduate qualifications in genomic medicine as part of its Genomics Education Programme, as well as numerous other online-learning resources ( https://www.genomicseducation.hee.nhs.uk ). In addition, a range of professional training courses in molecular pathology are also available: ‘ Molecular Pathology and Diagnosis of Cancer’ delivered by the Wellcome Genome Campus and RCPath, ‘ UK Molecular Diagnostics Training School ’ delivered by the Nottingham Molecular Pathology Node, ‘ Molecular Pathology Study Day ’ organised by the British Division of the International Academy of Pathology (BDIAP) and ‘ Getting to Grips with Genomics ’ which is a joint initiative between CM-Path, RCPath and HEE, and importantly, provides education in molecular pathology to both trainees and trainers alike. Finally, legal, accreditation and regulatory frameworks must be considered when selecting or developing new molecular diagnostic tests. New in vitro diagnostic devices (IVD) must be approved before clinical adoption; regulatory guidelines for such approval exist both within the UK and the European Union (EU). In the UK, the Medicines and Healthcare products Regulatory Agency (MHRA) is responsible for ensuring that medical devices are safe for clinical use. 
Currently, there is a Europe-wide transition to the new EU Regulation on In Vitro Diagnostic Medical Devices 2017/746. This regulation sets out a new pathway for certification that will be carried out by approved notified bodies and Conformité Européenne In Vitro Diagnostic (CE IVD) approval is a sign of conformity with European standards. Whilst still to be confirmed, it is likely that these changes will apply in the UK even after its withdrawal from the EU. In the UK, all molecular assays and laboratory processes must also be accredited by the United Kingdom Accreditation Service (UKAS) through meeting a range of different International Organization for Standardization (ISO) requirements. UKAS also requires that IVDs undergo external quality assessment (EQA), with such quality control exercises most commonly conducted by the United Kingdom National External Quality Assessment Service (UK NEQAS). In the United States of America, IVDs are classified based on likely patient risk and are usually required to undergo premarket approval (PMA), unless there is a specific exemption. Through the Molecular Diagnostics Forum, for example, CM-Path is working closely with the MHRA and The British In Vitro Diagnostic Association (BIVDA) in order to ensure that regulators are involved at an early stage in the development of new diagnostic tests.
Our NCRI CM-Path Molecular Diagnostics Forum meetings proved to be highly constructive in identifying strengths and weaknesses in the application of molecular pathology across the NHS, and the group is committed to facilitating continued collaboration between pathology (in both the NHS and academia), industry and regulators. To our knowledge, this is the first cross-sector attempt at defining the roadmap for molecular diagnostic tests, from conception through to deployment and use in accredited laboratories within the NHS. Whilst this process is currently complex, we believe that many of the challenges that we have identified can be overcome through closer collaboration between key stakeholders and with the network of GLHs. The next forum meeting will have a specific emphasis on addressing optimal sample handling for molecular testing, how the new ‘hub and spoke’ arrangement of GLHs will impact upon the specimen journey from patient to laboratory, and how molecular testing at GLHs can potentially be integrated with the digital pathology being performed at the above-mentioned five new centres. Lessons learned will be integrated into the roadmap, further developing molecular diagnostic capabilities in the UK. CM-Path would be delighted to hear from any individual or group who feel that the Molecular Diagnostics Forum is relevant to their work and who would like to attend future meetings—please email [email protected] to get in touch.
The characteristics, occurrence, and toxicological effects of alternariol: a mycotoxin

Mycotoxin-producing fungi belong to various fungal genera, mainly Penicillium , Fusarium , Aspergillus , and Alternaria (Greeff-Laubscher et al. ). The genus Alternaria was originally described in 1816, with an increasing number of species being characterized since then (Ostry ). The major mycotoxin-producing Alternaria species is Alternaria alternata . Other mycotoxin-producing Alternaria species are: Alternaria arborescens , Alternaria blumeae , Alternaria tenuissima , Alternaria longipes , Alternaria radicina , Alternaria dauci , and Alternaria infectoria (Nan et al. ). Alternaria species are traditionally classified based on the morphology of reproductive structures and sporulation patterns. Nowadays, molecular techniques are being used for fungal classification as a more reliable and less tedious method (Zhang et al. ). Some Alternaria fungi are saprophytes, usually found in outdoor environments and in/on surfaces like soil, wallpaper, and textiles (Ostry ). However, most Alternaria species are plant pathogens that can adapt to various environmental conditions, including low humidity and low temperatures. Therefore, besides affecting plants during their growth stage, Alternaria may be a major causative agent of post-harvest diseases in fruits and vegetables during storage and transportation (Ji et al. ). Alternaria species cause worldwide economic losses by affecting the leaves, stems, flowers, and fruits of a number of plants. They are ranked as one of the highest loss-causing fungal genera among all plant pathogens (Behiry et al. ). Species belonging to Alternaria are necrotrophs, which can live on dead organic matter such as decaying wood and wood pulp, allowing them to survive for years in fields and infect future agricultural commodities (Chung ).
They are also categorized as aeroallergens because their light-weight spores can be dispersed by air (Grewling et al. ). Alternaria infection efficiency is enhanced by the melanized wall of its spores, which protects them from ultraviolet light and desiccation, and by the formation of multiple germ tubes per spore during germination (Fig. ) (Chain ). Alternaria mycelium grows at an optimum temperature of between 18 and 25 °C; however, spore infection and germination can occur within a wide temperature range of 4 to 35 °C (Chain ). During infection, Alternaria species produce host-specific and non-host-specific phytotoxins as well as extracellular enzymes that destroy plant cell walls at the infection site, which plays a major role in pathogenicity against plants (Wu and Wu ). Due to their tolerance of a wide range of environmental conditions, Alternaria can infect a range of produce in various geographic locations, which promotes the spread of its mycotoxins (Louro et al. ). Fruits and vegetables affected by Alternaria species usually show a visible rotten area, like the black mold on a tomato, which is avoided by consumers. In cereal grains, Alternaria causes a disease known as black point, which is characterized by discoloration of the germ and seed. However, in some Alternaria diseases, like the core rot of apples and black rot of citrus, the visible symptoms occur only inside the plant, yet the mycotoxins will have diffused to all parts of the plant, causing adverse health effects when consumed (Chain ; Pinto and Patriarca ). Fruit- and vegetable-based processed foods like jams and juices might contain Alternaria mycotoxins due to the lack of industrial procedures to eliminate infected fresh produce prior to processing (Saleh and Goktepe ). Mycotoxin levels in fruit-based processed foods also increase due to the lack of symptoms in fruits with Alternaria core infections (Patriarca ).
Alternaria toxins have received increased research interest in the last few years, enabling the development of advanced and rapid detection methods (Han et al. , ). Fungi belonging to the Alternaria genus produce more than 70 known mycotoxins belonging to three different structural groups: dibenzopyrone derivatives, perylene derivatives, and tetramic acid derivatives (Pinto and Patriarca ). The most toxicologically concerning Alternaria toxins are: alternariol (AOH), alternariol monomethyl ether (AME), tenuazonic acid (TeA), tentoxin (TEN), altertoxin II (ATX II) and altenuene (ALT) (Babič et al. ; Schultz et al. ). These mycotoxins were isolated and characterized between 1953 and 1986, with AOH first being discovered in 1953 (Ostry ). The most studied among Alternaria mycotoxins are those with benzopyrone groups, which include the two major toxins AOH and AME (Escrivá et al. ). Alternariol is often found in grains, fruits, and fruit-based food products such as jams and juices (Puvača et al. ). High levels of AOH have also been encountered in legumes, nuts, tomatoes and oilseed foods (Solhaug et al. ). Like other mycotoxins, Alternaria mycotoxins can cause many adverse health effects in humans. In the last decade, scientists have demonstrated the toxicity of Alternaria mycotoxins in vitro. The mutagenicity of Alternaria mycotoxins in general, and the genotoxicity of AOH and AME in particular, have been well demonstrated by showing DNA damage caused by indirect mechanisms (Aichinger et al. ). In addition, a correlation between the occurrence of Alternaria mycotoxins and esophageal cancer has been reported in the literature (Solhaug et al. ). Alternariol has also shown structural similarity to estrogen, which suggests a major endocrine-disruptive role of AOH (Stiefel and Stintzing ). Despite multiple studies proving the risks of Alternaria mycotoxins, worldwide regulation of these mycotoxins in food is still lacking.
Exceptionally, the Bavarian health and food safety authority has specified a tenuazonic acid limit of 500 µg/kg in sorghum/millet-based infant food (Ji et al. ). In addition, the European Food Safety Authority (EFSA) performed a risk assessment for four of the known Alternaria mycotoxins (alternariol, alternariol monomethyl ether, tenuazonic acid, and tentoxin). As a result, threshold of toxicological concern (TTC) levels were set for the four mycotoxins (EFSA ). Several review articles have been published in the field over the past few years. However, most of them are related to the modes of detection of mycotoxins, to mycotoxins in specific commodities, or to Alternaria mycotoxins in general. Recent reviews focusing on AOH are lacking. The present review focuses on the characteristics of AOH, its environmental fate, its possible routes of exposure, its occurrence in different food products in the last decade, its toxicity in cells and animal models as reported in the literature in the last two years, its carcinogenicity and anticancer activity, as well as possible control methods. This comprehensive review should serve as a guideline on AOH for mycotoxin-regulating and policy-developing entities, and for food scientists and health risk assessors around the world.
Environmental factors, including temperature and water activity, are among the most significant factors affecting mycotoxigenic fungal growth at the pre-harvest and post-harvest levels (Gab-Allah et al. ). Anthropogenic activities, including large-scale deforestation, the use of fossil fuels as the main energy source, the over-exploitation of Earth’s resources, and other human activities, have contributed to global climate change (Vagelas and Leontopoulos ). Concentrations of anthropogenic greenhouse gases (GHG), including methane, carbon dioxide, nitrous oxide, and chlorofluorocarbons, have increased in the atmosphere in recent decades, resulting in global warming (Reineke and Schlömann ). The resulting climatic changes vary regionally. More frequent heat waves, extreme temperatures and precipitation events are expected in a number of regions. Yearly mean precipitation is expected to increase at high latitudes, in many mid-latitude wet regions, and in the equatorial Pacific; a decrease is anticipated in many mid-latitude and subtropical dry regions, resulting in droughts (Medina et al. ). Global warming and its associated changes in climate are likely to lead to an increased number of biotic and abiotic stresses on crops, which would have variable effects on the interactions between crops and fungal pathogens such as mycotoxigenic fungi (Medina et al. ). Mycotoxins are climate-dependent, plant-related, and storage-associated problems. They are influenced by certain non-infectious factors, such as the bioavailability of nutrients and insect damage, which in turn are driven by climatic conditions. Climate represents the key agro-ecosystem driving force of fungal contamination in agricultural commodities and, therefore, of mycotoxin production (Paterson and Lima ). An example of the effect of a climate change-related stress on the levels of fungal infection was observed on maize in northern Italy between 2003 and 2004.
Prolonged drought conditions and extremely elevated temperatures stressed the maize plants, making them more prone to fungal infections (Giorni et al. ). Quantitative estimations of the effects of global warming on mycotoxin contamination were conducted for deoxynivalenol (DON) in wheat in northwestern Europe and for aflatoxin B1 (AFB1) in maize and wheat in Europe. Results revealed an increase in contamination levels in both crops as a result of future climate (Medina et al. ). In general, the increase in temperatures in areas with originally cool or temperate conditions might make those areas more liable to aflatoxins, ochratoxin A, patulin and other mycotoxins associated with warm areas. Avoiding post-harvest diseases in such cases would come with an increased cost (Tsitsigiannis et al. ). On the other hand, a possible positive effect of climate change is the excessive increase in temperatures in areas of the globe that are already hot, which might lead to the extinction of certain mycotoxin-producing fungi (Paterson and Lima ). Future changes in rainfall and temperature will modify the entire ecosystem. Modifications related to both the extinction of existing insect and plant species and the appearance of new ones would certainly affect the availability of fungal strains and might therefore bring novel mycotoxin threats to crops (Tsitsigiannis et al. ). The shifting geographic distribution of mycotoxigenic fungi in response to global warming will make them harder to control (Medina et al. ). For crops showing high levels of AOH contamination, such as grains, storage would become more challenging with increased humidity, which might increase levels of AOH and other mycotoxins (Castañares et al. ). To avoid unexpected future problems that might cause unforeseen economic losses, a prediction system for possible mycotoxin levels could be developed.
As weather forecasts have already become well developed to guide control strategies for various important diseases worldwide, it is similarly possible to link weather-based plant disease forecasts to recent climate change models. We would, therefore, have an idea of the possible effects of climate change on mycotoxins, including their location, types, and extent of change (Paterson and Lima ). Climate change is only one of the megatrends that cause long-term global effects. The European Environment Agency (EEA) has identified 11 global megatrends, among which globalization, technological development and climate change have a major impact on fungal distribution around the world (Magyar et al. ). Globalization has facilitated the transfer of fungal spores overseas, as shown in a study conducted in Qatar on the fungal strains, including Alternaria species, found growing on fresh produce in the domestic market. Results showed that the country of origin is the most significant factor affecting the level of contamination and the type of fungi (Saleh and Al-Thani ). The fungi detected on goods and packaging materials imported from different countries might infect local fresh produce, cause increases in mycotoxin levels, and even lead to the introduction of new mycotoxins (Migliorini et al. ). The most common pathway for the movement of microorganisms across borders is the trade of plants, especially potted living ornamental plants, where soil-borne microorganisms have a higher chance of surviving transportation and becoming established at their destination. Alternaria is a common pathogen of plants’ green leaves, which would increase the levels of mycotoxins worldwide (Santini et al. ). In the USA, annual plant imports increased by 500% between 1967 and 2010. Similar trends were also observed in Europe and all over the world (Magyar et al. ).
Nowadays, rapid transportation and reduced delivery times increase the survival of pathogens and lead to the spread of new species in new destinations. If an invasive fungus survives, adapts and multiplies in a new environment, its eradication becomes a great challenge. All of this adds to existing stresses and leads to unexpected mycotoxins in food products (Magyar et al. ). Technological development is also one of the anthropogenic activities that affect mycotoxin distribution around the world. Fungi are well adapted to colonizing human-made materials, which makes their distribution sensitive to technological development. For example, the introduction of new building materials may lead to the growth of unexpected fungi, depending on the regional climate. It is important to study the interaction between fungi, substrates and climatic factors before introducing new technologies in construction (Magyar et al. ). Finally, the increased application of chemical fungicides in agriculture has led to the emergence of multi-drug-resistant pathogens, which are a public health concern (Saleh and Goktepe ). The development of biological controls that can limit fungal growth and, therefore, mycotoxin levels is a crucial research area for protecting the environment from the adverse effects of chemicals and for combating multi-drug-resistant strains (Saleh and Abu-Dieyeh ).
3,7,9-Trihydroxy-1-methyl-6H-dibenzo[b,d]pyran-6-one, known as alternariol (AOH; C14H10O5), is a benzochromenone belonging to the family of isocoumarins and their derivatives. AOH has a molar mass of 258.229 g/mol and crystallizes from ethanol as colorless needles (PubChem ). The melting point of AOH is 350 °C. It is soluble in most organic solvents and gives a purple color reaction with ethanolic ferric chloride (Chain ). The chemical structure of AOH is represented in Fig. .
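The molar mass quoted above can be cross-checked directly against the molecular formula using standard atomic weights; the short Python sketch below is purely illustrative and not part of the cited characterization.

```python
# Recompute the molar mass of alternariol (C14H10O5) from standard
# atomic weights, as a sanity check on the 258.229 g/mol figure.
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "O": 15.999}  # g/mol

def molar_mass(formula: dict) -> float:
    """Sum atomic weights over the element counts of a formula."""
    return sum(ATOMIC_WEIGHTS[el] * n for el, n in formula.items())

aoh = {"C": 14, "H": 10, "O": 5}  # alternariol, C14H10O5
print(f"AOH molar mass: {molar_mass(aoh):.3f} g/mol")  # → 258.229 g/mol
```

The computed value agrees with the literature figure to three decimal places.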
Detailed knowledge of the biosynthesis of AOH and of its metabolism is important for developing accurate detection methods and for better evaluating residual toxicological risks (Zhao et al. ). Alternaria alternata produces more than 70 identified secondary metabolites, many of which are mycotoxins. Alternariol (AOH) and alternariol-9-methyl ether (AME) are two of the major food contaminants among Alternaria mycotoxins (Pinto and Patriarca ). However, the genetic basis of the biosynthesis of these two polyketide-based compounds is not well understood. One of the core enzyme categories involved in the biosynthesis of AOH and AME is the polyketide synthases (PKSs) (Saha et al. ). Many of the biologically active fungal compounds are synthesized through polyketide biosynthesis pathways involving type I PKSs. Polyketide synthases are structurally and functionally similar to mammalian fatty acid synthases (Cox and Simpson ). Type I PKSs are large protein structures consisting of multiple covalently connected domains, which play a role in various catalytic steps. The basic type I PKS module consists of an acyltransferase (AT) domain, which is responsible for the starting stage of polyketide synthesis. The elongation stages are the function of the acyl carrier protein (ACP) domain, which connects the starter group to the keto-synthase (KS) domain to catalyze carbon-bond formation. Elongation is terminated by the thioesterase (TE) domain, which hydrolyzes the completed polyketide chain from the ACP domain. Furthermore, many other functional domains can exist in the structure of type I PKSs depending on their role, including keto-reductase (KR), dehydratase (DH), enoyl-reductase (ER), and methyl-transferase (MT) domains (Weissman ). Saha et al. identified ten PKS genes in the genome of A. alternata . Among the identified genes, two (pksJ and pksH) had their expression correlated with the production of AOH and AME.
The encoded enzymes belong to the type I-reducing polyketide synthases, with lengths of 2222 and 2821 amino acids, respectively (Saha et al. ). Figure represents a simple suggested model of the biosynthesis of AOH and its methylated derivative AME. In this model, only the ACP and KS domains are needed to initiate and elongate the polyketide, in addition to TE to finalize it. In this model, biosynthesis starts with acetyl-CoA and consists of six condensation reactions, in each of which activated malonate is incorporated with the loss of a carboxyl group as CO2. As only two keto-synthase domains have been identified during alternariol biosynthesis, it is likely that the six condensation reactions are catalyzed by the same domain. The aromatization process, which leads to the final natural product, could happen before or after the polyketide is liberated from the enzyme complex, catalyzed by a thioesterase. Similarly, lactonization is possible either together with the liberation process or directly after it. Both steps (aromatization and lactonization) are likely to occur spontaneously without requiring enzymes (Saha et al. ). It is worth mentioning, when describing AOH biosynthesis, that changes in the osmotic status of the substrate affect alternariol production. High environmental osmolarity is usually transmitted to the transcriptional level of downstream regulated genes by the high-osmolarity glycerol (HOG) signaling cascade, which is a MAP kinase transduction pathway. The Alternaria alternata HOG gene (AaHOG) plays an important role in the regulation of alternariol biosynthesis (Graf et al. ).
Alternaria toxins can be partially metabolized in plants to form a large number of conjugated metabolites. The toxicological relevance of modified mycotoxin forms and their occurrence in food is still largely unexplored. High-resolution mass spectrometry (HRMS) techniques are being developed to detect mycotoxins in their modified forms (Righetti et al. ). Mycotoxins bound to more polar substances such as glucose, amino acids and sulfates are known as masked mycotoxins, which are a health concern (Chain ). A study demonstrated that AOH conjugates readily with glucose in cultured tobacco BY-2 cells, showing that masked AOH can be formed directly in plant cells (Hildebrand et al. ). Alternaria alternata has also been shown to produce, alongside AOH, a sulfate conjugate of the mycotoxin and a sulfate/glucoside conjugate of AOH. Alternariol sulfate and AOH glucoside have been encountered in certain types of foods (Soukup et al. ; Walravens et al. ). Having free hydroxyl groups available for metabolic conjugation, AOH might occur in many masked forms, including alternariol-3-glucoside (AOH3G), alternariol-3-sulfate (AOH3S), alternariol monomethyl ether-3-glucoside (AME3G), and alternariol monomethyl ether-3-sulfate (AME3S) (Escrivá et al. ). Alternariol can undergo aromatic hydroxylation by CYP450 enzymes and by the enzymes of the first phase of metabolism, producing catechols and hydroquinones, which are involved in reactive oxygen species (ROS) generation and thereby cause cell toxicity. This supports the relevance of a possible in vivo oxidative metabolism of this mycotoxin (Burkhardt et al. ). At the same time, the presence of AOH increases the transcription of CYP450 in cells (Aichinger et al. ). Knowledge of the toxicity of the AOH oxidative metabolites is crucial for assessing the health risks of the mycotoxin. Lower amounts of mycotoxins would be expected in processed foods, compared to fresh produce, provided that the processing steps degrade the mycotoxin.
In the case of AOH, a study of the effect of baking on mycotoxin levels in the final baked products (using spiked whole-meal wheat flour) showed that wet baking did not affect the level of AOH, while dry baking caused a significant reduction in the mycotoxin (Siegel et al. ). However, a long fermentation period reduced AOH in whole wheat dough preparation (Janić Hajnal et al. ). Alternaria species are a common cause of moldy core diseases in many fruits, including citrus fruits and apples. Infected fruits cannot be detected, as they might not show any visible external symptoms, and therefore might be destined for industrial processing (Pavicich et al. ). A study conducted on clear and cloudy apple juices to evaluate the efficacy of the different treatment steps in lowering AOH levels showed that the usual clear juice treatment stages, including pectinolytic enzyme treatment and pasteurization, did not have any significant effect on the level of AOH found in raw juice. However, fining with subsequent filtration using activated charcoal/bentonite lowered the AOH level from 79 µg/L to the limit of quantification (4.6 µg/L). As for the cloudy juice processing steps, no step, including centrifugation or pasteurization, showed any effect on the studied level of the mycotoxin (Aroud et al. ). Therefore, if the fruits used in juice production carry a certain AOH level, their juices are likely to retain that contamination unless special treatments, such as ultra-filtration (a clarification step), are applied (Pavicich et al. ). Similarly, a recent study demonstrated the detection of AOH and its conjugates in the final drink after malting and brewing during beer preparation, which indicates that the processing stages are not enough to eliminate mycotoxins originating from contaminated raw barley and malt (Prusova et al. ). Alternaria species are common in nature and may affect in-field plants.
A recent study evaluated the levels of AOH in different parts of winter wheat plants by inoculating AOH into their nutrient solutions in a hydroponic system to simulate soil contamination in the field. After one week of exposure, 5% of the inoculated AOH was recovered from the plants, with 58% in the roots, 16% in the crown, and 1% in the leaves. The recovered fraction increased to 21% of the inoculated amount after two weeks of exposure. Besides AOH recovery, 26 AOH conjugates were detected in different parts of the plants (Jaster-Keller et al. ). The study indicates that in-field contamination would lead to significant levels of mycotoxins and their masked forms in fresh produce, even without actual fungal contamination of the growing plant. Masked mycotoxins are mycotoxins associated with other molecules by covalent or non-covalent bonds, which allows them to escape the usual mycotoxin detection methods due to differences in polarity between the native mycotoxin and its metabolites. Since it is possible that a masked mycotoxin re-releases its native toxic form after enzymatic hydrolysis in the human digestive tract, human exposure levels to AOH may be higher than estimated. Very limited data are available on the occurrence of mycotoxin metabolites in food or animal feed (Escrivá et al. ). On the other hand, some recent studies demonstrate a decrease in Alternaria mycotoxins in general, and AOH in particular, in the digestive tract. An in vitro short-term fecal incubation assay showed a reduction in mycotoxin concentrations. Additionally, DNA strand breaks usually induced by Alternaria mycotoxins were significantly quenched by the end of the 3 h incubation period, while some other genotoxicity mechanisms were not affected. Ingested mycotoxins might interact with the gut microbiota and food constituents, which would modify their bioavailability and overall toxicity.
Although results did not show a direct correlation between the metabolic activity of the gut microbiota and modifications in mycotoxin content, it is possible that mycotoxins were adsorbed onto bacterial cells and food constituents, which would lower their presence and their genotoxicity. Additional studies are needed to understand the fate of AOH in the digestive system (Crudo et al. ).
Humans and animals can be exposed to mycotoxins via the consumption of contaminated food products, including fruits and vegetables in their fresh and processed forms (El-Sayed et al. ). Fungal diseases can occur in-field through contaminated soil, air and irrigation water (Jain et al. ). They can also affect fresh produce at different post-harvest stages. Moreover, workers or harvesting equipment can also serve as a contamination source if hygienic practices are not strictly followed (Chatterjee et al. ). Infectious fungi might also affect fruits and vegetables during transportation or storage via contaminated containers. As Alternaria species can grow at low temperatures, they can infect produce during refrigerated transportation or storage (Li et al. ). At the storage and display levels, cross-contamination becomes a major concern. Final food processing steps might also lead to fungal contamination (Saleh and Goktepe ). All of this can lead to mycotoxin-contaminated fresh produce, which is a major risk factor for human health (El-Sayed et al. ).
To date, there are no regulations for AOH levels in food, despite its known toxicity (Ji et al. ). Alternariol is still classified among the food contaminants called “emerging mycotoxins” (Aichinger et al. ). According to the European Food Safety Authority (EFSA), the threshold of toxicological concern (TTC) for AOH is 2.5 ng/kg bw/day (Solhaug et al. ). Nevertheless, the nature of the genotoxicity of AOH is not fully understood. The fact that AOH can be metabolized into DNA adducts indicates that even low absorbed amounts of the mycotoxin are concerning (Aichinger et al. ). The highest recent human exposure rate to AOH, according to EFSA, is in toddlers, with a mean exposure of between 3.8 and 71.6 ng/kg bw/day (EFSA ). This is higher than the TTC for potential genotoxic substances, recently referred to as potential DNA-reactive mutagens, of 2.5 ng/kg bw/day (EFSA ). Indicative levels for AOH are set in certain foods by the European Union (EU). These levels are based on the EFSA database. Samples with contamination above the indicative levels require further investigation to limit the factors leading to the presence of AOH, such as initial fresh produce contamination or elevated mycotoxin levels caused by food processing. However, indicative levels are not food safety levels. The alternariol indicative level in cereal-based foods for infants and young children is as low as 2 µg/kg. Meanwhile, 10 µg/kg is the indicative level for processed tomato products and sunflower oil, while 30 µg/kg is the indicative level for sesame and sunflower seeds (EU ). In addition, it is not fully understood whether AOH in the masked-mycotoxin form can be hydrolyzed and absorbed in the gastrointestinal tract, which would therefore add to the overall exposure rate.
Alternaria mycotoxins in general, and AOH in particular, occur at high levels in fruits, fruit-based food products, vegetables, cereal-based food products, tomatoes, and tomato-based food products. Populations with diets based on these food categories are the most exposed to AOH; this includes infants and toddlers. In addition, vegetarians are generally more exposed to mycotoxins, and therefore to AOH, than the general population (EFSA ). Analyses of contamination levels in food products and mean consumption data showed possible exposure levels to different mycotoxins. A study conducted on tomato products, baked products, sunflower seeds, fruit juices and vegetable oils showed that, based on the consumption rates of the population studied, the average daily exposure to AOH might reach 1400% of the suggested EFSA TTC level (Hickert et al. ).
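The arithmetic behind such exposure estimates is straightforward: the estimated daily intake (EDI) is the contamination level multiplied by the daily consumption and divided by body weight, and can then be compared with the EFSA TTC of 2.5 ng/kg bw/day. The sketch below illustrates the calculation; only the TTC figure comes from the text, while the contamination level, consumption rate, and body weight are hypothetical example values.

```python
# Illustrative estimated-daily-intake (EDI) calculation for AOH.
# The TTC (2.5 ng/kg bw/day) is the EFSA value cited in the text;
# the contamination and consumption figures below are hypothetical.
TTC_NG_PER_KG_BW_DAY = 2.5

def edi_ng_per_kg_bw(contamination_ug_per_kg: float,
                     consumption_g_per_day: float,
                     body_weight_kg: float) -> float:
    """EDI (ng/kg bw/day) = level (µg/kg) x intake (kg/day) x 1000 / bw (kg)."""
    intake_kg_per_day = consumption_g_per_day / 1000.0
    return contamination_ug_per_kg * intake_kg_per_day * 1000.0 / body_weight_kg

# Hypothetical: a tomato product at 10 µg/kg AOH, 200 g/day, 70 kg adult
edi = edi_ng_per_kg_bw(10.0, 200.0, 70.0)
print(f"EDI: {edi:.1f} ng/kg bw/day "
      f"({100 * edi / TTC_NG_PER_KG_BW_DAY:.0f}% of TTC)")
# → EDI: 28.6 ng/kg bw/day (1143% of TTC)
```

Even these modest hypothetical inputs exceed the TTC by an order of magnitude, which is consistent with the exceedances reported by Hickert et al.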
The main route of exposure to mycotoxins is the direct consumption of contaminated food products (Saleh and Goktepe ). Prolonged exposure to AOH has adverse effects on human health (El-Sayed et al. ). Advanced analytical methods for mycotoxin detection in fresh produce and food-based products are crucial for determining contamination levels and, therefore, for setting appropriate toxicological standards. The determination of Alternaria mycotoxins is largely based on a sequence of steps, starting with the pre-treatment of samples, followed by clean-up through solvent partitioning or solid-phase extraction (Gab-Allah et al. ). Solid–liquid extraction with acetonitrile or ethyl acetate is the most common extraction method (Escrivá et al. ). The final separation and detection of mycotoxins occur through different methods, including chromatographic techniques (thin-layer chromatography; high-performance liquid chromatography (HPLC); liquid chromatography–mass spectrometry (LC–MS); gas chromatography–mass spectrometry (GC–MS) and others), immunological techniques (enzyme-linked immunosorbent assay (ELISA); lateral flow immunochromatographic assay (LFIA); fluorescence polarization immunoassay (FPIA) and others), biosensor techniques, and some sophisticated methods such as near-infrared spectroscopy (NIR) and others (Gab-Allah et al. ). Worldwide, multiple studies have surveyed the levels of AOH in fruits, vegetables and derived products, mainly in tomatoes, apples, cereals, and cereal by-products (Escrivá et al. ). The stability of mycotoxins during food processing is a major factor that adds to a mycotoxin’s significance as a risk factor (Avîrvarei et al. ). As Alternaria mycotoxins occur in cereals, their stability was evaluated during wet and dry baking: most Alternaria toxins were stable during wet baking, while significant degradation occurred during dry baking, with AME and AOH being the most stable (Siegel et al. ).
Alternariol showed heat stability up to 100 °C in sunflower flour (Lee et al. ). The stability of AOH has also been evaluated in beverages: it remained stable for up to five weeks in spiked apple juice and up to eight days in spiked white wine at room temperature (Fernández-Cruz et al. ). This stability highlights the importance of AOH surveillance analyses in food products. The possible co-occurrence of multiple mycotoxins in food products makes the presence of even trace amounts of a particular mycotoxin significant (Muñoz-Solano and González-Peñas ). Table summarizes the contamination levels of AOH in food products as reported by studies conducted in the last ten years. Levels of AOH were recorded in 127 commodities belonging to different food categories, including beverages, fresh and dried fruits and vegetables, nuts, cereals, processed foods, and other food products. As AOH is an emerging mycotoxin, studies reporting its occurrence levels have increased in number over the last five years, as can be inferred from the number of articles appearing in the database search per year. Studies on AOH occurrence have mainly been conducted in China and in some European countries (Fig. ); around 26% of the data covered in Table are reported from China. Among the four records of AOH levels in apples, the highest level was found in samples collected in China, with an average AOH level of 935.96 ± 178.37 µg/kg, followed by samples from Italy, with an average occurrence level of 159.90 ± 6.92 µg/kg. As for apple juice, five records were included, with the highest average level in samples from Spain (207.00 ± 12 μg/L). Notably, these apple juice samples showed the highest levels of AOH among all recorded beverages. Among cereals, barley from Argentina showed the highest contamination levels.
Nine records of AOH levels in wheat are included in Table : the highest average AOH level is in samples collected from Slovenia (39.00 ± 1 µg/kg), and the lowest is in samples evaluated in Canada, with an average level of 2.20 ± 3.3 µg/kg. Although AOH is an emerging mycotoxin with no regulated levels in food, the EU has set recommendations for its levels in some food categories. In this study, the levels of AOH were recorded in 18 commodities belonging to the EU-indicated food categories, of which only four have levels exceeding the recommended levels (22.2%) (Table ). Considering the toxicity of this mycotoxin and the widespread occurrence of AOH in food products intended for human consumption, as shown in Table , it is important to have more toxicological studies on other food production stages, such as in the field, during transportation, and during storage (Escrivá et al. ).
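The exceedance figure quoted above can be verified with a line of arithmetic (only the 4-of-18 counts come from the text):

```python
# Share of EU-category commodities with AOH above the recommended levels,
# using the counts reported in the text (4 of 18).
exceeding, total = 4, 18
pct = 100 * exceeding / total
print(f"{pct:.1f}%")  # 22.2%
```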
A recent detailed toxigenic profile of AOH and its metabolites using an in silico working model, based on the MetaTox, Swiss ADME, pKCMS, and PASS online computational programs, has confirmed the known cytotoxic, mutagenic, carcinogenic, and endocrine disruptor effects of the mycotoxin. The computational model has also predicted other toxicological endpoints for AOH, including vascular toxicity, hematotoxicity, diarrhea, and nephrotoxicity (Marin and Taranu ). Alternariol has a potential influence on the immune response. Suppression of pro-inflammatory responses in human epithelial cells and in human macrophages has been described in the literature (Aichinger et al. ). In addition, Alternaria toxins have a direct effect on the gut microbiome by affecting the viability of certain strains that usually colonize the gut and play a crucial role in the function of the digestive system (Aichinger et al. ). The chemical structure of AOH has similarities with natural and synthetic estrogens, which suggests an endocrine-disrupting role for AOH. Lehmann et al. were the first to describe the effect of AOH on endocrine pathways, showing that it binds to and activates estrogen receptors (ERs). Recent studies showed that the endocrine activity of AOH and its conjugated forms is more complex than previously described. One of the recent findings is the action of AOH as an androgen receptor agonist (Aichinger et al. ). More controversially, a recent study shows that Alternaria culture extracts have anti-estrogenic activity. This toxicological effect contradicts the estrogen-mimicking effect shown by AOH; however, it can be explained by the ability of perylene quinone compounds within the Alternaria mycotoxins to interact with the aryl hydrocarbon receptor (AhR), a key regulator of phase I xenobiotic metabolism. This interaction might degrade ERs or at least modify ER-related signaling (Aichinger et al. ).
The toxicity of AOH has been explored using animal models such as mice, rats, and zebrafish. The exposure route was mainly ingestion, with the mycotoxin added to the animals' food or water. Table summarizes the results of animal model studies from the last two years. It is important to highlight the study conducted by EFSA on AOH toxicokinetics. Oral application of 2000 mg/kg of AOH to NMRI mice showed low absorption of the mycotoxin, as 90% of the administered dose was recovered from feces, while only 0.06% was found in blood. However, it should be noted that possible digestive tract inflammation might increase absorption and lead to higher toxicity (Schuchardt et al. ).
Toxicological data on AOH are limited, with a lack of sound bioavailability and long-term clinical studies. Little is known about the exact toxicity mechanisms, bioavailability, and stability of AOH in the digestive system. However, Alternaria mycotoxins in general have been proven to cause adverse health effects in animals, including cytotoxicity, fetotoxicity, and teratogenicity. They are also mutagenic, clastogenic, and estrogenic in microbial and mammalian cell systems and tumorigenic in rats (Escrivá et al. ). Among the suggested toxicity mechanisms of Alternaria toxins is their ability to alter cell membrane fluidity in intestinal cells, which directly affects the function of the gastrointestinal tract (Aichinger et al. ). The occurrence of Alternaria mycotoxins has been correlated with esophageal cancer, although other mycotoxins co-occurred in countries with a high incidence of esophageal cancer (Solhaug et al. ). Many studies have evaluated the effect of AOH on cells. The accumulated cell toxicity data clearly suggest that this mycotoxin has adverse effects on various cells, as summarized in Table . In addition to the individual effects of AOH on cells, some studies have shown synergistic effects of AOH with other mycotoxins: specific ratios of the two Alternaria mycotoxins AOH and ATX exerted additional cytotoxicity on the HepG2, HT29, and HCEC-1CT cell lines compared to the effect of each mycotoxin individually (Vejdovszky et al. ). It can be inferred from the data in Table that AOH is genotoxic and can damage DNA at multiple levels, causing single-stranded DNA breaks (SSB) and double-stranded DNA breaks (DSB) together with oxidative DNA damage. Alternariol genotoxicity was first observed by Pfeiffer et al. and then further demonstrated by others.
Alternariol metabolism is known to lead to the production of catechols and quinones; such reactive metabolites can undergo redox cycling, resulting in reactive oxygen species (ROS) generation, and can also covalently bind to DNA to cause damage (Fernández-Blanco et al. ). Many ROS-associated intracellular events have been observed in cells exposed to AOH. However, the addition of antioxidants does not modify the downstream consequences of AOH exposure, including cell cycle arrest, which implies that initial mechanisms are involved in AOH genotoxicity prior to ROS production (Solhaug et al. ). Topoisomerases are crucial enzymes in DNA replication and transcription, as they facilitate chromosome untangling. Alternariol has been proven to inhibit the function of topoisomerase enzymes and to stabilize the covalent topoisomerase–DNA intermediate. This leads to DSB and therefore to genotoxicity that can result in cell cycle arrest (Aichinger et al. ; Pinto and Patriarca ). Downstream, the DNA damage response pathway has been proven to be activated upon cell exposure to AOH, mainly through activation of p53, a major protein that regulates DNA repair, cell cycle arrest, apoptosis, autophagy, and senescence, and an indicator of carcinogenicity (Solhaug et al. ). The activation of p53 leads to increased levels of proteins that repair cell damage, including proliferating cell nuclear antigen (PCNA), as well as increased levels of p21 (Solhaug et al. ). Exposure to AOH also increases intracellular levels of cyclin B, which can lead to cell cycle arrest. It activates AMP-activated protein kinase (AMPK), which usually functions as a cellular energy sensor, and decreases the activation of the mammalian target of rapamycin (mTOR), which regulates cell growth and survival. These signaling pathways lead to cell autophagy and senescence (Solhaug et al. ). Besides genotoxicity, AOH is known to act as an endocrine disruptor by mimicking estrogen and activating androgen receptors.
Androgen/estrogen imbalance and inflammation, both observed in prostate cancer, were examined in a recent study evaluating different doses of AOH on prostate epithelial cells. At a high dose of 10 µM, AOH induced oxidative stress, DNA damage, and cell cycle arrest. Interestingly, these effects were shown to be partially mediated by the activation of ERβ, indicating a role of estrogen mimicry in the cytotoxicity and genotoxicity of AOH (Kowalska et al. ).
Alternariol and/or its derivatives have shown potential anticancer effects in a number of preclinical studies. Scientific results indicate that this mycotoxin exhibits anticancer activity through several pathways, including cytotoxicity, oxidative stress by ROS, cell cycle arrest, apoptotic cell death, genotoxicity, anti-proliferation, autophagy, and estrogenic mechanisms. All previously discussed AOH toxicity mechanisms may apply to cancer cells, which has led scientists to explore it as a possible chemotherapeutic agent (Islam et al. ). Chemotherapy is a type of anticancer treatment using single or combined chemical components that kill cancer cells or stop their multiplication and proliferation (Patyal et al. ). Owing to the varied toxicity mechanisms of mycotoxins, these fungal metabolites have recently become the center of attention for scientists working on the development of novel anticancer drugs (de Menezes et al. ). Furthermore, mycotoxins are heat-resistant, stable compounds, which adds to their value as possible anticancer medications (Jafarzadeh et al. ). Among the mechanisms involved in the anticancer effectiveness of AOH is its cytotoxicity, the first characteristic evaluated when a chemical is considered as an anticancer drug (Anca Oana et al. ). The cytotoxic effect of alternariol has been demonstrated in many studies: for example, AOH showed cytotoxic effects on the A549 lung cancer cell line and also alleviated carcinoma in BALB/c mouse models (Li et al. ). Alternariol has also been demonstrated to induce oxidative stress in cancer cells, and studies showing ROS generation by AOH are numerous. Bensassi et al. first showed dose-dependent ROS generation by AOH, leading to mitochondrial dysfunction-dependent cytotoxic effects in human colon carcinoma (HCT116) cells (Bensassi et al. ).
Among the anticancer mechanisms, apoptosis is a form of programmed cell death that occurs in human cells in response to internal or external cell-disturbing events (Fernández-Lázaro et al. ). Many anticancer agents are designed to initiate apoptosis in tumor cells, and AOH has been demonstrated to induce apoptosis via a mitochondria-dependent pathway characterized by p53 activation (Bensassi et al. ). Anticancer drugs can also act by exerting genotoxic and mutagenic effects on cancer cells; as previously discussed, AOH is known for its genotoxicity in both normal and cancer cells (Crudo et al. ). An anti-proliferative effect is another desired mechanism in an anticancer drug, and previous studies have shown that AOH exerts an anti-proliferative effect in CaCo-2 cells (Vila-Donat et al. ). Explored anticancer agents are also studied as autophagy inducers in cancer cells (Kamalzade et al. ). A previous study on RAW264.7 macrophage cells showed a dose-dependent increase in the autophagy marker LC3 when treated with different concentrations of AOH (Solhaug et al. ). Despite these promising anticancer mechanisms, there are many therapeutic limitations of mycotoxins as anticancer drugs, including insufficient knowledge of the pharmacokinetics, solubility, and metabolism of AOH. The main concern in this approach is the insufficient understanding of how AOH could molecularly target tumor cells without causing systemic toxicity to the body (Islam et al. ).
Co-infection of some crops, such as grains, pome fruits, and grapes, with Alternaria and other toxigenic strains such as Fusarium , Penicillium , and Aspergillus is common. Therefore, the co-occurrence of Alternaria toxins with other mycotoxins is likely, which makes risk assessment difficult to perform because of the adverse synergistic effects that this combination can have on human health (Nan et al. ). Alternariol is stable at pH 5, and it can be degraded by 0.18 M phosphate/citrate buffer at pH 7 into 6-methylbiphenyl-2,3′,4,5′-tetrol (Siegel et al. ). Alternariol also shows stability during pasteurization (Elhariry et al. ). Levels of mycotoxins might change during food processing, depending on their stability. Surprisingly, clarification of pomegranate juice has been shown to increase AOH levels. This might be due to the presence of conjugated forms of AOH in the juice, which are cleaved into the free mycotoxin upon clarification with proteolytic enzymes (Elhariry et al. ). The application of antioxidants, such as N-acetylcysteine (NAC) and ascorbic acid (vitamin C), was not useful in preventing the cell cycle arrest and autophagy effects of AOH on cells, which implies that initial mechanisms are involved in AOH genotoxicity (Chain ).
Mycotoxins are concerning natural contaminants that occur in agricultural products and have adverse effects on human and animal health. There is a continuous search for effective prevention measures and control strategies to reduce the levels, and therefore the toxicity, of these mycotoxins (Awuchi et al. ). Strategies that control fungal growth in the first place are among the most effective. However, the adverse effects of pesticides on both human health and the environment make this form of control controversial (Saleh and Goktepe ). Alternatively, scientists are exploring natural products as biological control agents to replace commonly used chemicals. Many studies have shown success in controlling Alternaria species in fruits and vegetables using natural oils, plant extracts, bacterial bacteriocins, fungal extracts, algal extracts, and others. Further efforts are to be directed toward the commercialization of these findings (Saleh and Abu-Dieyeh ). Among the successfully described AOH control methods, extrusion reduced AOH levels by up to 87% when processing conditions were optimized (Janić Hajnal et al. ). Arginine has also been proven to reduce AOH biosynthesis when applied to fruits at the post-harvest level (Touhami et al. ). As AOH has three OH groups in its structure, it can be easily oxidized using cold plasma. However, this method is limited by the low penetration rate of the reactive species responsible for mycotoxin degradation, which restricts its application to superficial food contamination (Ravash et al. ). Cold plasma showed a good degradation rate of AOH (up to 60%) in wheat flour samples (Doshi and Šerá ). A more sophisticated technique involves dielectric barrier discharge cold plasma, which increases the degradation rate of AOH to 100%, as shown by Wang et al. . In some studies, ultraviolet treatment using UVC reduced AOH concentrations by up to 80% (Lopes et al. ).
A recent study has shown the effectiveness of β-cyclodextrin bead polymer (BBP) treatment in reducing AOH levels in red wine (Fliszár-Nyúl et al. ). Many Bacillus species have been evaluated for their potential as biological control agents to regulate the growth of fruit- and vegetable-spoiling organisms and to produce metabolites that can be used in mycotoxin degradation. Bacillus licheniformis in particular has shown a high rate of enzymatic AOH degradation through CotA laccase production (Veras et al. ). In some cases, food processing steps themselves lead to AOH reduction; for example, dough fermentation for 48 h at 25 °C reduced the level of AOH by 41.5% (Janić Hajnal et al. ).
Anthropogenic activities and global megatrends have affected the geographic distribution of mycotoxin-producing fungi. Globalization has facilitated the introduction of additional fungal strains to new destinations, and global warming has led to increased levels of mycotoxins, including AOH, in fields and during storage. Different detection techniques have been developed to evaluate mycotoxins in food. However, AOH exists in many masked forms, conjugated with other metabolites. Masked mycotoxins cannot yet be detected by conventional methods, and because they can be metabolized back into their native forms in the body, they add to the risks. Future efforts should focus on the development of detection tools that cover mycotoxins in all their forms. Food-processing stages are usually not enough to lower the levels of AOH in the final food product. Additional treatments are usually needed, and the literature shows that various techniques have been successfully described to control AOH in food products. However, the classification of this mycotoxin among the emerging mycotoxins and the lack of studies that focus on the detection of masked forms of mycotoxins are allowing AOH to spread more widely around the world. As an emerging mycotoxin, levels of AOH in food are not yet regulated. However, the literature shows that exposure levels can reach 3.8 to 71.6 ng/kg bw/day, which is above the 2.5 ng/kg bw/day threshold of toxicological concern for potentially genotoxic substances. The groups most at risk of AOH exposure are those who consume large quantities of fruits and vegetables, notably cereal-based foods and tomato-based products. Although tolerable levels of AOH have not yet been set, the application of the threshold of toxicological concern (TTC) approach by EFSA indicates concern regarding human exposure to AOH. A ten-year surveillance table was developed in this review to summarize the reported occurrences of AOH in food products around the world.
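The exposure comparison above follows the standard estimated-daily-intake calculation, EDI = concentration × daily consumption / body weight, screened against the 2.5 ng/kg bw/day TTC. In the minimal sketch below, the concentration, consumption, and body-weight figures are illustrative assumptions rather than values from the review; only the TTC value is taken from the text.

```python
# TTC screening for chronic AOH exposure (illustrative inputs, not study data).
# EDI (ng/kg bw/day) = concentration (ng/g) * daily intake (g/day) / body weight (kg)

TTC_NG_PER_KG_BW_DAY = 2.5  # threshold cited in the review for potentially genotoxic substances

def estimated_daily_intake(conc_ng_per_g, intake_g_per_day, body_weight_kg):
    """Chronic estimated daily intake in ng/kg bw/day."""
    return conc_ng_per_g * intake_g_per_day / body_weight_kg

def exceeds_ttc(edi, ttc=TTC_NG_PER_KG_BW_DAY):
    return edi > ttc

# Assumed scenario: 5 ng/g AOH in a tomato product, 100 g eaten per day, 70 kg adult.
edi = estimated_daily_intake(5.0, 100.0, 70.0)
print(f"EDI = {edi:.2f} ng/kg bw/day; exceeds TTC: {exceeds_ttc(edi)}")
```

A full risk assessment would sum such terms over all contaminated foods in the diet; this sketch shows only the single-commodity screening step.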
Such surveillance is crucial for raising awareness and supporting health risk assessors. The data show TTC-exceeding levels in four studies conducted on samples from Spain, Germany, Argentina, and South Africa. Exposure of animal models to AOH showed adverse health effects, which led to death at higher doses. The cytotoxicity of AOH has been widely evaluated, and the latest literature gathered in this review shows genotoxicity through direct interaction with DNA, causing single-stranded DNA breaks (SSB) and double-stranded DNA breaks (DSB). Proven cytotoxicity mechanisms include the generation of reactive oxygen species (ROS) in cells exposed to AOH. Some studies have explored the use of AOH as an anticancer treatment to induce apoptosis and autophagy of cancer cells; however, restricting the effect to tumor cells remains the main therapeutic limitation of this approach. The consistency of the evidence collected and the findings of the reviewed studies show that AOH exposure is cytotoxic and carcinogenic and has endocrine disruptor effects. Therefore, the levels of AOH in food products and its risks to human health require further attention, especially among the populations at risk. It is important to be protected from such a widely occurring toxicant, which is associated with a range of agricultural and food-based products relevant to the human diet. The use of the information presented in this review will lead to a better understanding of AOH as a toxicant. The analysis of the occurrence data gathered here will give future health risk assessors solid results that can be used either to recommend further occurrence surveillance or to set exposure levels and maximum tolerable levels of AOH in the near future. This will lead to the application of the most effective preventive measures to protect humans from any possible adverse effects.
Ethics Statement

The human tissue used in this study was handled under the guidelines of the Declaration of Helsinki. Institutional review board approvals for research involving human subjects were obtained from the Friedrich-Alexander University Erlangen-Nürnberg (Applied number: 140_20 B) and Doshisha University (Applied number: 20009). Informed consent was acquired from patients with FECD who underwent Descemet's membrane endothelial keratoplasty at the Friedrich-Alexander University Erlangen-Nürnberg. Patients who were unable to provide informed consent, prisoners, and vulnerable populations were excluded from the study. Additionally, patients with advanced FECD, for whom insufficient corneal endothelial cells could be collected for RNA sequencing analysis, were also excluded. Stripped Descemet's membranes, including corneal endothelial cells, were obtained following the surgery.

Culture of Corneal Endothelial Cells Derived From a Patient With FECD

Immortalized corneal endothelial cells derived from patients with FECD (iFECD) were established previously and used in this study. The iFECD cells were cultured in Dulbecco's Modified Eagle Medium (DMEM; Nacalai Tesque, Kyoto, Japan) containing 10% fetal bovine serum (FBS) and 1% penicillin and streptomycin (Nacalai Tesque). When the cells reached 80% confluency, they were passaged using 0.05% Trypsin-EDTA (Nacalai Tesque). For some experiments, iFECD cells were cultured until 80% confluency and then cultured for 24 hours in fresh FBS-free DMEM supplemented with 10 ng/mL TGF-β2 (Wako Pure Chemical Industries, Ltd., Osaka, Japan).

Knockout of the TCF4 Gene Using the CRISPR–Cas9 System

The basic helix–loop–helix (bHLH) domain of TCF4 in iFECD cells, or 20 bases in exon 9 of TCF4 in iFECD cells, was knocked out using CRISPR/Cas9 (hereafter, iFECD TCF4 ΔbHLH and iFECD TCF4 −/−, respectively).
Guide RNA (gRNA) for CRISPR–Cas9 was designed on Feng Zhang's website ( http://crispr.mit.edu/ ; Massachusetts Institute of Technology; site no longer active). The insert oligonucleotides for bHLH-in-TCF4 deletion gRNA-1 were 5′-CACCGCCACAGCAATAATGACGATG-3′ and 5′-AAACCATCGTCATTATTGCTGTGGC-3′, and those for bHLH-in-TCF4 deletion gRNA-2 were 5′-CACCGAGTCTGGAGCAGCAAGTCCG-3′ and 5′-AAACCGGACTTGCTGCTCCAGACTC-3′ for the TCF4 gene (Gene ID: 6925). The insert oligonucleotides for the 20-bases-in-exon-9 deletion gRNA-1 were 5′-CACCGGACTACAAATAGGGACTCGCC-3′ and 5′-AAACGGCGAGTCCCTATTGTAGTC-3′, and those for the 20-bases-in-exon-9 deletion gRNA-2 were 5′-CACCGCAAGCACTGCCGACTACAAT-3′ and 5′-AAACATTGATGTCGGCAGTGCTTG-3′ for the TCF4 gene. The complementary oligonucleotides for each gRNA were annealed and cloned into lentiCRISPR v2, a gift from Feng Zhang (Addgene plasmid #52961; http://n2t.net/addgene:52961 ; RRID:Addgene_52961; Addgene, Watertown, MA, USA). The insertions of the gRNAs were verified by Sanger sequencing (SeqStudio Genetic Analyzer, Thermo Fisher Scientific, Waltham, MA, USA). Each plasmid vector was cotransfected with psPAX2 (Plasmid #12260; Addgene) and pCMV-VSV-G (Plasmid #8454; Addgene) into 293T cells using OptiMEM-I with Lipofectamine 3000 (Thermo Fisher Scientific). Lentiviral supernatants were harvested after 24 hours and concentrated using Lenti-X Concentrator (Clontech Laboratories, Inc., Mountain View, CA, USA) according to the manufacturer's protocol. iFECD cells were cultured in 6-well plates to ∼70% confluency in DMEM supplemented with 10% FBS and penicillin/streptomycin (Nacalai Tesque). Lentiviral concentrate (100 µL), polybrene (5 µg/mL; Nacalai Tesque), and puromycin (1 µg/mL; InvivoGen, San Diego, CA, USA) were added to the culture medium, and the iFECD cells were further cultured. After 5 days, the surviving cells were collected and cultured as single cells in 96-well plates to establish single-cell clones.
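As a sanity check on such cloning designs, a lentiCRISPR v2 insert pair should consist of a top strand with a CACCG overhang followed by the 20-nt spacer, and a bottom strand with an AAAC overhang followed by the reverse complement of the spacer plus a terminal C. The helper below is a small sketch of that rule (not part of the original protocol), applied to the bHLH gRNA-1 pair quoted above:

```python
# Minimal check that a top/bottom oligo pair anneals into a lentiCRISPR v2 insert:
# top = "CACCG" + 20-nt spacer, bottom = "AAAC" + revcomp(spacer) + "C".
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMP)[::-1]

def valid_lenticrispr_pair(top: str, bottom: str) -> bool:
    if not (top.startswith("CACCG") and bottom.startswith("AAAC")):
        return False
    spacer = top[5:]  # 20-nt protospacer after the CACCG overhang
    return len(spacer) == 20 and bottom == "AAAC" + revcomp(spacer) + "C"

# bHLH-deletion gRNA-1 pair from the text (5'->3', overhangs included):
top = "CACCGCCACAGCAATAATGACGATG"
bottom = "AAACCATCGTCATTATTGCTGTGGC"
print(valid_lenticrispr_pair(top, bottom))  # True
```

The same check applied to the exon-9 pairs as printed would flag mismatches, which may simply reflect transcription artifacts in the source text rather than the actual oligos used.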
The single-cell clones were isolated and passaged after 14 to 17 days of culture.

Genomic DNA Analysis and Sequencing

Cultured cells were harvested using 0.05% Trypsin-EDTA, centrifuged, and then lysed using a MonoFas gDNA Cultured Cells Extraction Kit VI (Animos, Saitama, Japan) to extract DNA. A forward primer (5′-CTTACTCCTGTTAAGCTGCCTTG-3′) and reverse primer (5′-CTAAATCCATAAGGCAGCATCCC-3′) were used to confirm the deletion of bHLH. The PCR products were amplified using a T3000 thermocycler (Analytik Jena, Jena, Germany) under the following conditions: 35 cycles of denaturation at 95°C for 20 seconds, annealing at 55°C for 20 seconds, and elongation at 72°C for 20 seconds. The PCR amplicons were subjected to electrophoretic separation on 1% agarose gels, followed by staining with ethidium bromide and visualization under ultraviolet light using an Amersham Imager 600 (GE Healthcare, Chicago, IL, USA). The PCR amplicons were purified using ExoSAP-IT (Thermo Fisher Scientific). The sequences of the treated PCR products were confirmed by Sanger sequencing (SeqStudio Genetic Analyzer, Thermo Fisher Scientific) with the following primers: forward primer (5′-CTTACTCCTGTTAAGCTGCCTTG-3′) and reverse primer (5′-CTAAATCCATAAGGCAGCATCCC-3′) for iFECD TCF4 ΔbHLH, and forward primer (5′-GTAAAACGACGGCCAGT-3′) and reverse primer (5′-CAGGAAACAGCTATGAC-3′) for iFECD TCF4 −/−.

Protein Isolation for Mass Spectrometry

The iFECD and iFECD TCF4 ΔbHLH cells were washed with PBS, detached using TrypLE (Thermo Fisher Scientific), and washed again three times with PBS. The cell pellets were flash frozen in liquid nitrogen and stored at −80°C until analysis. The cell pellets were lysed by sonication in a buffer containing 2% SDS and 50 mM triethylammonium bicarbonate, supplemented with Halt Protease and Phosphatase Inhibitor Cocktail (Thermo Fisher Scientific).
After sonication, the lysates were centrifuged, and the supernatant was collected for protein quantification using the BCA protein assay. Protein quality was verified by electrophoresis of 20 µg protein on a 10% SDS-PAGE gel. Reduction and alkylation of proteins were achieved by treating the samples with 5 mM dithiothreitol at 60°C for 1 hour, followed by 10 mM iodoacetamide at room temperature for 30 minutes in the dark. The proteins were precipitated using ice-cold acetone with an incubation period of 12 hours at 4°C, after which the samples were centrifuged and the resulting pellet was resuspended in 50 mM triethylammonium bicarbonate. This was followed by enzymatic digestion with trypsin (Promega, Madison, WI, USA) for 12 hours. The resulting peptides were purified using a Sep-Pak C18 Plus Light Double Luer-Lock Cartridge (Waters, Milford, MA, USA). The digested peptides were acidified with 1% formic acid and centrifuged, and the supernatants were collected. The Sep-Pak column was activated using 100% acetonitrile followed by 0.1% formic acid; the acidified peptide samples were then loaded onto the column, washed with 0.1% formic acid, and eluted with 40% acetonitrile in 0.1% formic acid. Following elution, the peptides were dried, resolubilized in 100 mM triethylammonium bicarbonate (TEAB) buffer, and labeled with TMT10plex Isobaric Label Reagents and Kits (Thermo Fisher Scientific) following the manufacturer's instructions.

Basic pH Reverse Phase Liquid Chromatography Fractionation

The labeled peptides were solubilized in 1 mL basic pH RPLC solvent A (7 mM TEAB, pH 8.5) and fractionated by basic pH reverse phase liquid chromatography (bRPLC) on an XBridge BEH C18 Column (Waters), employing a progressively increasing gradient of bRPLC solvent B (7 mM TEAB, pH 8.5, 90% acetonitrile) on an Agilent 1260 HPLC system (Agilent Technologies, Santa Clara, CA, USA).
The flow rate of the mobile phase was set at 0.3 mL/min, and the eluted peptides were monitored by absorbance changes at 280 nm. The procedure was completed over a total duration of 90 minutes, yielding a collected volume of 27 mL. The 96 fractions were subsequently consolidated into 12 fractions and vacuum dried.

Liquid Chromatography/Tandem Mass Spectrometry Analysis

Lyophilized peptides were resuspended in 0.1% formic acid and analyzed using an Orbitrap Fusion Lumos Mass Spectrometer (Thermo Fisher Scientific) interfaced with an Easy-nLC 1200 nanoflow liquid chromatography system (Thermo Fisher Scientific). The peptides were applied to a precolumn (nanoViper; 100 µm × 20 mm, Thermo Fisher Scientific) at a flow rate of 3 µL/min for enrichment and subsequently separated on an analytical column (HPLC Column Acclaim RSLC 120 C18, 75 µm × 50 cm; Thermo Fisher Scientific) at a flow rate of 280 nL/min. Elution was performed using a step gradient of 8% to 22% solvent B (0.1% formic acid in 95% acetonitrile) over 70 minutes, followed by an increase to 22% to 35% solvent B from 70 to 103 minutes. The total acquisition time was 120 minutes. The mass spectrometer was operated in data-dependent acquisition mode. Survey full-scan mass spectra (from m/z 350 to 1600) were acquired in the Orbitrap at a resolution of 120,000 at 200 m/z . The AGC target for MS1 was set at 4 × 10⁵ and the ion filling time at 50 ms. The most intense ions with charge state ≥2 were isolated with an isolation window of 1.6 in a 3-second cycle and fragmented using higher-energy collisional dissociation (HCD) at 34% normalized collision energy, then detected at a mass resolution of 50,000 with an ion injection time of 100 ms.

Analysis of Differentially Expressed Proteins (DEPs)

For protein identification and quantification, the SEQUEST search algorithm was employed in Proteome Discoverer software against the Human RefSeq protein database.
The search parameters included a maximum of two missed cleavages. Carbamidomethylation of cysteine and TMT 10-plex (+229.163) modification at the peptide N-terminus and lysine were set as fixed modifications, while oxidation of methionine was a variable modification. For MS data, the monoisotopic peptide mass tolerance was set to 10 ppm and the MS/MS tolerance to 0.1 Da. A false discovery rate of 1% was set at both the peptide-spectrum match level and the protein level. Subsequent analyses were conducted using Perseus software to compute fold changes and P values through t -tests, with fold changes log2-transformed. The criteria for identifying DEPs were |log2 fold change| ≥ 0.5 and P < 0.05. A volcano plot integrating log2 fold changes and P values was generated to depict the distribution of each protein, utilizing the ggplot2 package in R. Proteins upregulated in iFECD TCF4 ΔbHLH relative to iFECD were marked with red dots, whereas downregulated proteins were denoted with blue dots. Additionally, heatmap clustering was performed using the heatmap.2 function within the gplots package for R, with all protein expression levels normalized to z -scores and illustrated across a spectrum from +2 to −2. Red stripes represented relatively high expression, and blue stripes indicated relatively low expression.

Functional Enrichment and Protein–Protein Interaction Analyses

Gene Ontology (GO) analysis was performed using the clusterProfiler package (version 4.2.2) in R. Significantly enriched GO terms were determined with a P value threshold of <0.05. The top 12 GO terms, representing biological processes (BP), cellular components (CC), and molecular functions (MF), were selected and graphically visualized using the ggplot2 package (version 3.3.6) in R. For pathway-based enrichment analysis, Reactome and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses were also conducted.
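The DEP cutoffs and heatmap scaling described above (|log2 fold change| ≥ 0.5 with P < 0.05, and per-protein z-scores displayed over ±2) were applied in Perseus and R; restated schematically in stdlib Python, they amount to:

```python
from statistics import mean, pstdev

def classify_dep(log2_fc: float, p_value: float,
                 fc_cut: float = 0.5, p_cut: float = 0.05) -> str:
    """Apply the stated DEP criteria: |log2 fold change| >= 0.5 and P < 0.05."""
    if p_value < p_cut and abs(log2_fc) >= fc_cut:
        return "up" if log2_fc > 0 else "down"
    return "ns"  # not significant

def zscore_clipped(values, lo=-2.0, hi=2.0):
    """Z-score one protein's expression row and clip to the heatmap range [-2, +2]."""
    mu, sd = mean(values), pstdev(values)
    return [max(lo, min(hi, (v - mu) / sd)) for v in values]

print(classify_dep(0.8, 0.01), classify_dep(-0.6, 0.02), classify_dep(0.8, 0.2))
# up down ns
```

This is only a schematic re-statement of the thresholds, not a reimplementation of the Perseus t-test pipeline.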
KEGG pathway analysis was conducted with the clusterProfiler package and illustrated using the ggplot2 package in R. Reactome pathway analysis was carried out using the ReactomePA (version 1.38.0) and ggplot2 packages. Significantly enriched pathways, identified with a P < 0.05, were visually presented, showcasing the top 12 pathways with their gene ratios on the x-axis. P values were transformed as −log10 and displayed with colors ranging from blue to red using the scales package. Protein–protein interaction (PPI) networks were constructed with GeneMANIA ( http://genemania.org/ ), a freely accessible online tool.

Confirmation of Altered ECM-Related Molecules at the mRNA Level Using RNA Sequencing Data

Our RNA sequencing (RNA-seq) data for the corneal endothelium derived from patients with FECD and healthy subjects were obtained from the DDBJ database. Two other RNA-seq data sets available in the GEO repository were also downloaded. Data preprocessing was conducted utilizing fastp for the removal of adapter bases and low-quality reads. The refined reads were then mapped to the reference genome with the STAR alignment tool, and gene expression was quantified with RSEM. Differential gene expression analysis was performed using the DESeq2 package in R, applying adjusted P value criteria to compare gene expression in the corneal endothelium of patients with FECD against that of healthy controls. The expression levels of specific genes of interest were visualized as boxplots constructed in R with the ggplot2 package.

Immunocytochemistry and Aggresome Staining

Cells were fixed with 4% paraformaldehyde for 10 minutes, permeabilized using 1% Triton X-100 (Nacalai Tesque), and subsequently blocked with 2% bovine serum albumin to prevent nonspecific binding. The samples were incubated overnight at 4°C with primary antibodies against fibronectin (dilution 1:1000; BD Biosciences, Franklin Lakes, NJ, USA).
Alexa Fluor 488–conjugated goat anti-mouse antibodies (Life Technologies, Carlsbad, CA, USA) were used as secondary antibodies, applied at a dilution of 1:1000 and incubated at 37°C for 45 minutes. Aggresomes were identified using an aggresome-specific reagent (dilution 1:1000; Enzo Life Science, Farmingdale, NY, USA) at 37°C for 45 minutes. Nuclei were stained with DAPI (Vector Laboratories, Carlsbad, CA, USA). Fluorescence microscopy analysis was conducted using a DM 2500 microscope (Leica Microsystems, Wetzlar, Germany). Colocalization analysis was performed using the ImageJ software (version 1.54f; National Institutes of Health, Bethesda, MD, USA). Manders’s coefficients were calculated to quantify the degree of colocalization between aggresome and fibronectin signals. Western Blotting The cells from iFECD, iFECD TCF4 −/− , and iFECD TCF4 ΔbHLH were rinsed with ice-cold PBS and lysed using ice-cold radioimmunoprecipitation assay buffer supplemented with phosphatase inhibitor cocktail 2 (MilliporeSigma, Burlington, MA, USA) and a protease inhibitor cocktail (Roche Applied Science, Penzberg, Germany). The lysates were centrifuged at 800 × g for 10 minutes, and the concentration of total proteins in the supernatants was determined utilizing the BCA Protein Assay Kit (Thermo Fisher Scientific). 
The proteins were then separated by SDS-PAGE and transferred onto PVDF membranes, which were then blocked with 3% nonfat dry milk for 1 hour at room temperature and incubated overnight at 4°C with primary antibodies against cleaved caspase-3 (1:1000; Cell Signaling Technology, Danvers, MA, USA), cleaved poly (ADP-ribose) polymerase (cleaved PARP) (1:1000; Cell Signaling Technology), glyceraldehyde-3-phosphate dehydrogenase (GAPDH) (1:3000; Medical & Biological Laboratories Co., Ltd., Tokyo, Japan), TCF4 (1:500), Snail1 (1:1000; Cell Signaling Technology), ZEB1 (1:1000; Cell Signaling Technology), fibronectin (1:20,000; BD Biosciences), phosphorylated Smad3 (p-Smad3) (1:1000; Cell Signaling Technology), Smad2 (1:1000; Cell Signaling Technology), phosphorylated Smad2 (p-Smad2) (1:1000; Cell Signaling Technology), and Smad3 (1:1000; Cell Signaling Technology). Following primary antibody incubation, the blots were washed and incubated with horseradish peroxidase–conjugated secondary antibodies (1:5000; GE Healthcare, Chicago, IL, USA) and visualized using luminal-based enhanced chemiluminescence with the ECL Advanced Western Blotting Detection Kit (Nacalai Tesque). The relative density of immunoblot bands from Western blot analyses was quantified using ImageJ software. Flow Cytometry For flow cytometry analysis, control and TGF-β2–treated cells were stained with DMEM containing Annexin V (Medical & Biological Laboratories Co., Ltd.) for 15 minutes and harvested using Accumax (Innovative Cell Technologies, San Diego, CA, USA). Flow cytometric analysis was performed using CellQuest Pro software (BD Biosciences) for data acquisition and analysis. Statistical Analysis All statistical analyses were performed using R software. For comparisons between two groups, statistical significance was assessed using Student's t -test. For multiple group comparisons, Dunnett's multiple-comparisons test was applied. Statistical significance was defined as P < 0.05 for all analyses. 
Results are presented as mean ± SEM.
The human tissue used in this study was handled in accordance with the guidelines of the Declaration of Helsinki. Institutional review board approvals for research involving human subjects were obtained from the Friedrich-Alexander University Erlangen-Nürnberg (Applied number: 140_20 B) and Doshisha University (Applied number: 20009). Informed consent was acquired from patients with FECD who underwent Descemet's membrane endothelial keratoplasty at the Friedrich-Alexander University Erlangen-Nürnberg. Patients who were unable to provide informed consent, prisoners, and vulnerable populations were excluded from the study. Additionally, patients with advanced FECD, for whom insufficient corneal endothelial cells could be collected for RNA sequencing analysis, were also excluded. Stripped Descemet's membranes, including corneal endothelial cells, were obtained following surgery.
Immortalized corneal endothelial cells derived from patients with FECD (iFECD) were established previously and used in this study. The iFECD cells were cultured in Dulbecco's Modified Eagle Medium (DMEM; Nacalai Tesque, Kyoto, Japan) containing 10% fetal bovine serum (FBS) and 1% penicillin and streptomycin (Nacalai Tesque). When the cells reached 80% confluency, they were passaged using 0.05% Trypsin-EDTA (Nacalai Tesque). For some experiments, iFECD cells were cultured until 80% confluency and further cultured with fresh DMEM without FBS supplemented with 10 ng/mL TGF-β2 (Wako Pure Chemical Industries, Ltd., Osaka, Japan) for 24 hours.
Knockout of the TCF4 Gene Using the CRISPR–Cas9 System
The basic helix–loop–helix (bHLH) domain in TCF4 of iFECD or 20 bases in exon 9 of TCF4 of iFECD were knocked out using CRISPR–Cas9 (hereafter, iFECD TCF4 ΔbHLH and iFECD TCF4 −/− , respectively). Guide RNA (gRNA) for CRISPR–Cas9 was designed on Feng Zhang's website ( http://crispr.mit.edu/ ; Massachusetts Institute of Technology; site no longer active). The insert oligonucleotides for bHLH in TCF4 deletion gRNA-1 were 5′-CACCGCCACAGCAATAATGACGATG-3′ and 5′-AAACCATCGTCATTATTGCTGTGGC-3′, and for bHLH in TCF4 deletion gRNA-2, they were 5′-CACCGAGTCTGGAGCAGCAAGTCCG-3′ and 5′-AAACCGGACTTGCTGCTCCAGACTC-3′ for the TCF4 gene (Gene ID: 6925). Insert oligonucleotides for 20 bases in exon 9 in TCF4 deletion gRNA-1 were 5′-CACCGGACTACAAATAGGGACTCGCC-3′ and 5′-AAACGGCGAGTCCCTATTGTAGTC-3′, and insert oligonucleotides for 20 bases in exon 9 in TCF4 deletion gRNA-2 were 5′-CACCGCAAGCACTGCCGACTACAAT-3′ and 5′-AAACATTGATGTCGGCAGTGCTTG-3′ for the TCF4 gene. The complementary oligonucleotides for gRNA were annealed and cloned into lentiCRISPR v2, a gift from Feng Zhang (Addgene plasmid #52961; http://n2t.net/addgene:52961 ; RRID:Addgene_52961; Addgene, Watertown, MA, USA). The insertions of the gRNAs were verified by Sanger sequencing (SeqStudio Genetic Analyzer, Thermo Fisher Scientific, Waltham, MA, USA). Each plasmid vector was cotransfected with psPAX2 (Plasmid #12260; Addgene) and pCMV-VSV-G (Plasmid #8454; Addgene) into 293T cells using Opti-MEM I with Lipofectamine 3000 (Thermo Fisher Scientific). Lentiviral supernatants were harvested after 24 hours and concentrated using Lenti-X Concentrator (Clontech Laboratories, Inc., Mountain View, CA, USA) according to the manufacturer's protocol. iFECD cells were cultured in 6-well plates to ∼70% confluency in DMEM supplemented with 10% FBS and penicillin/streptomycin (Nacalai Tesque).
Lentiviral concentrates (100 µL), polybrene (5 µg/mL; Nacalai Tesque), and puromycin (1 µg/mL; InvivoGen, San Diego, CA, USA) were added to the culture medium, and iFECD cells were further cultured. After 5 days, the surviving cells were collected and cultured as single cells in 96-well plates to establish single-cell clones. The single-cell clones were isolated and passaged after 14 to 17 days of culture.
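As a sanity check on paired insert oligonucleotides like those above, one can verify that the bottom strand is the reverse complement of the 20-nt spacer flanked by the 4-nt overhangs (CACCG…/AAAC…C) expected for lentiCRISPR v2 cloning. The sketch below is illustrative only (the helper names `revcomp` and `is_valid_lenticrispr_pair` are ours, not part of any published pipeline); the bHLH oligo pairs quoted in the text pass this check.

```python
def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[base] for base in reversed(seq))

def is_valid_lenticrispr_pair(top: str, bottom: str) -> bool:
    """Check that a top/bottom oligo pair anneals into a lentiCRISPR v2 insert:
    top = CACCG + 20-nt spacer, bottom = AAAC + revcomp(spacer) + C."""
    if not (top.startswith("CACCG") and bottom.startswith("AAAC")):
        return False
    spacer = top[5:]
    return bottom == "AAAC" + revcomp(spacer) + "C"

# bHLH deletion gRNA-1 and gRNA-2 oligos quoted in the text
assert is_valid_lenticrispr_pair("CACCGCCACAGCAATAATGACGATG",
                                 "AAACCATCGTCATTATTGCTGTGGC")
assert is_valid_lenticrispr_pair("CACCGAGTCTGGAGCAGCAAGTCCG",
                                 "AAACCGGACTTGCTGCTCCAGACTC")
```

Such a check catches transcription errors in oligo tables before ordering primers.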
Cultured cells were harvested using 0.05% Trypsin-EDTA, centrifuged, and then lysed using a MonoFas gDNA Cultured Cells Extraction Kit VI (Animos, Saitama, Japan) to extract DNA. Forward primer (5′-CTTACTCCTGTTAAGCTGCCTTG-3′) and reverse primer (5′-CTAAATCCATAAGGCAGCATCCC-3′) were used to confirm the deletion of the bHLH domain. The PCR products were amplified using a T3000 thermocycler (Analytik Jena, Jena, Germany) under the following conditions: 35 cycles of denaturation at 95°C for 20 seconds, annealing at 55°C for 20 seconds, and elongation at 72°C for 20 seconds. The PCR amplicons were subjected to electrophoretic separation on 1% agarose gels, stained with ethidium bromide, and visualized under ultraviolet light using an Amersham Imager 600 (GE Healthcare, Chicago, IL, USA). The PCR amplicons were purified using ExoSAP-IT (Thermo Fisher Scientific). The sequence of the treated PCR products was confirmed by Sanger sequencing (SeqStudio Genetic Analyzer, Thermo Fisher Scientific) with the following primers: forward primer (5′-CTTACTCCTGTTAAGCTGCCTTG-3′) and reverse primer (5′-CTAAATCCATAAGGCAGCATCCC-3′) for iFECD TCF4 ΔbHLH and forward primer (5′-GTAAAACGACGGCCAGT-3′) and reverse primer (5′-CAGGAAACAGCTATGAC-3′) for iFECD TCF4 −/− .
The iFECD and iFECD TCF4 ΔbHLH cells were washed with PBS, detached using TrypLE (Thermo Fisher Scientific), and washed again three times with PBS. The cell pellets were flash-frozen in liquid nitrogen and stored at −80°C until analysis. The cell pellets were lysed by sonication in a buffer containing 2% SDS and 50 mM triethylammonium bicarbonate, supplemented with Halt Protease and Phosphatase Inhibitor Cocktail (Thermo Fisher Scientific). After sonication, the lysates were centrifuged, and the supernatant was collected for protein quantification using the BCA protein assay. Protein quality was verified by electrophoresis of 20 µg protein on a 10% SDS-PAGE gel. Reduction and alkylation of proteins were achieved by treating the samples with 5 mM dithiothreitol at 60°C for 1 hour, followed by 10 mM iodoacetamide at room temperature for 30 minutes in the dark. The proteins were precipitated with ice-cold acetone during a 12-hour incubation at 4°C, after which the samples were centrifuged and the resulting pellet was resuspended in 50 mM triethylammonium bicarbonate. This was followed by enzymatic digestion with trypsin (Promega, Madison, WI, USA) for 12 hours. The resulting peptides were purified using a Sep-Pak C18 Plus Light Double Luer-Lock Cartridge (Waters, Milford, MA, USA). The digested peptides were acidified with 1% formic acid and centrifuged, and the supernatants were collected. A Sep-Pak column was activated using 100% acetonitrile, followed by 0.1% formic acid; the acidified peptide samples were then loaded onto the column, washed with 0.1% formic acid, and eluted with 40% acetonitrile in 0.1% formic acid. Following elution, the peptides were dried, resolubilized in 100 mM triethylammonium bicarbonate buffer (TEAB), and labeled with TMT10plex Isobaric Label Reagents and Kits (Thermo Fisher Scientific) following the manufacturer's instructions.
The labeled peptides were solubilized in 1 mL basic pH RPLC solvent A (7 mM TEAB, pH 8.5) and fractionated by basic pH reverse-phase liquid chromatography (bRPLC) on an XBridge BEH C18 Column (Waters), employing a progressively increasing gradient of bRPLC solvent B (7 mM TEAB, pH 8.5, 90% acetonitrile) on an Agilent 1260 HPLC system (Agilent Technologies, Santa Clara, CA, USA). The flow rate of the mobile phase was set at 0.3 mL/min, and the eluted peptides were monitored by absorbance at 280 nm. The separation was completed over a total duration of 90 minutes, yielding 96 collected fractions (27 mL in total). The 96 fractions were subsequently consolidated into 12 fractions and vacuum dried.
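The text does not state how the 96 fractions were mapped onto 12. A common scheme for bRPLC is concatenated pooling, in which every 12th fraction is combined so that each pool samples the whole gradient; the layout below is therefore an assumption for illustration, not taken from the source.

```python
def concatenated_pools(n_fractions: int = 96, n_pools: int = 12):
    """Assign fraction i to pool i % n_pools so that each pool spans
    early, middle, and late portions of the bRPLC gradient."""
    pools = [[] for _ in range(n_pools)]
    for i in range(n_fractions):
        pools[i % n_pools].append(i)
    return pools

pools = concatenated_pools()
assert len(pools) == 12 and all(len(p) == 8 for p in pools)
assert pools[0] == [0, 12, 24, 36, 48, 60, 72, 84]
```

Concatenation keeps each pool chemically diverse, which improves peptide coverage in the downstream LC-MS/MS runs.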
Lyophilized peptides were resuspended in 0.1% formic acid and analyzed using an Orbitrap Fusion Lumos Mass Spectrometer (Thermo Fisher Scientific) interfaced with an Easy-nLC 1200 nanoflow liquid chromatography system (Thermo Fisher Scientific). The peptides were applied to a precolumn (nanoViper; 100 µm × 20 mm, Thermo Fisher Scientific) at a flow rate of 3 µL/min for enrichment and subsequently separated on an analytical column (HPLC Column Acclaim RSLC 120 C18, 75 µm × 50 cm; Thermo Fisher Scientific) at a flow rate of 280 nL/min. The elution was performed using a step gradient of 8% to 22% solvent (0.1% formic acid in 95% acetonitrile) over 70 minutes, followed by 22% to 35% solvent from 70 to 103 minutes. The total acquisition time was set at 120 minutes. The mass spectrometer was operated in a data-dependent acquisition mode. Survey full-scan mass spectrometry (MS) spectra (from m/z 350–1600) were acquired in the Orbitrap at a resolution of 120,000 at 200 m/z. The AGC target for MS1 was set at 4 × 10^5, and the ion filling time was set at 50 ms. The most intense ions with charge state ≥2 were isolated with an isolation window of 1.6 in a 3-second cycle, fragmented using higher-energy collisional dissociation (HCD) with 34% normalized collision energy, and detected at a mass resolution of 50,000 with an ion injection time of 100 ms.
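A parts-per-million precursor tolerance, as used in the database search below, translates into an m/z-dependent window; the arithmetic is worth making explicit. A small illustrative helper (the function name is ours):

```python
def ppm_window(mz: float, ppm: float = 10.0):
    """Return the (low, high) m/z window for a given ppm tolerance."""
    delta = mz * ppm / 1e6
    return mz - delta, mz + delta

# at m/z 1000, a 10-ppm tolerance is a window of +/- 0.01
low, high = ppm_window(1000.0)
assert round(low, 6) == 999.99 and round(high, 6) == 1000.01
```

This is why ppm tolerances are preferred over fixed-Da tolerances for high-resolution MS1 data: the absolute window scales with the measured mass.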
For protein identification and quantification, the SEQUEST search algorithm was employed using Proteome Discoverer software against the Human RefSeq protein database. The search parameters included a maximum of two missed cleavages. Carbamidomethylation at cysteine and TMT 10-plex (+229.163) modification at the peptide N-terminus and lysine were set as fixed modifications, while oxidation of methionine was a variable modification. For MS data, the monoisotopic peptide mass tolerance was set to 10 ppm and the MS/MS tolerance to 0.1 Da. A false discovery rate of 1% was applied at both the peptide-spectrum match and protein levels. Subsequent analyses were conducted using Perseus software to compute fold changes and P values through t-tests, with fold changes transformed to the log2 scale. The criteria for identifying differentially expressed proteins (DEPs) were |log2 fold change| ≥ 0.5 and P < 0.05. A volcano plot integrating log2 fold changes and P values was generated to depict the distribution of each protein, utilizing the ggplot2 package in R. Proteins upregulated in iFECD TCF4 ΔbHLH relative to iFECD were marked with red dots, whereas downregulated proteins were denoted with blue dots. Additionally, heatmap clustering was performed using the heatmap.2 function within the gplots package for R, with all protein expression levels normalized to z-scores and illustrated across a spectrum from +2 to −2. Red stripes represented relatively high expression, and blue stripes indicated relatively low expression.
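Perseus performs these steps interactively, but the core arithmetic of the DEP filter and of the heatmap normalization is compact. A minimal Python sketch of the same computations (not the original Perseus/R workflow; function names are ours):

```python
import math

def classify_dep(mean_ko: float, mean_ctrl: float, p: float,
                 fc_cut: float = 0.5, p_cut: float = 0.05) -> str:
    """Apply the |log2 fold change| >= 0.5 and P < 0.05 criteria."""
    log2_fc = math.log2(mean_ko / mean_ctrl)
    if p < p_cut and log2_fc >= fc_cut:
        return "up"       # red dot on the volcano plot
    if p < p_cut and log2_fc <= -fc_cut:
        return "down"     # blue dot
    return "ns"           # not significant

def row_zscore(values, clip: float = 2.0):
    """Row-wise z-scores clipped to [-clip, +clip] for heatmap display."""
    m = sum(values) / len(values)
    sd = math.sqrt(sum((v - m) ** 2 for v in values) / (len(values) - 1))
    return [max(-clip, min(clip, (v - m) / sd)) for v in values]

assert classify_dep(2.0, 1.0, 0.01) == "up"      # log2 FC = +1
assert classify_dep(1.0, 2.0, 0.01) == "down"    # log2 FC = -1
assert classify_dep(3.0, 1.0, 0.20) == "ns"      # fails the P cutoff
assert row_zscore([1.0, 2.0, 3.0]) == [-1.0, 0.0, 1.0]
```

Clipping the z-scores to ±2, as in the heatmap described above, keeps a few extreme proteins from compressing the color scale for everything else.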
Functional Enrichment and Protein–Protein Interaction Analyses
Gene Ontology (GO) analysis was performed using the clusterProfiler package (version 4.2.2) in R. Significantly enriched GO terms were determined with a P value threshold of <0.05. The top 12 GO terms, representing biological processes (BP), cellular components (CC), and molecular functions (MF), were selected and visualized using the ggplot2 package (version 3.3.6) in R. For pathway-based enrichment analysis, Reactome and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses were also conducted. KEGG pathway analysis was conducted with the clusterProfiler package and illustrated using the ggplot2 package in R. Reactome pathway analysis was carried out using the ReactomePA (version 1.38.0) and ggplot2 packages. Significantly enriched pathways, identified with P < 0.05, were visually presented, showcasing the top 12 pathways with their gene ratios on the x-axis. P values were transformed as −log10(P) and displayed with colors ranging from blue to red using the scales package. For protein–protein interaction (PPI) networks, GeneMANIA ( http://genemania.org/ ), an accessible online tool, was employed.
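Over-representation tests of the kind clusterProfiler performs reduce to a hypergeometric tail probability: given N background genes of which K carry a term, the chance of seeing at least x annotated genes in a list of n. A minimal stdlib sketch of that core test (illustrative only; clusterProfiler additionally applies multiple-testing correction and gene-set size filtering):

```python
from math import comb

def hypergeom_enrichment_p(N: int, K: int, n: int, x: int) -> float:
    """P(at least x of the n listed genes carry the term),
    with N background genes of which K are annotated."""
    total = comb(N, n)
    return sum(comb(K, k) * comb(N - K, n - k)
               for k in range(x, min(K, n) + 1)) / total

# toy example: 5 of 20 background genes carry a term; 3 of 5 listed genes do
p = hypergeom_enrichment_p(N=20, K=5, n=5, x=3)
assert round(p, 4) == 0.0726
```

The "gene ratio" plotted on the x-axis of such enrichment figures is simply x/n for each term.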
Confirmation of Altered ECM-Related Molecules at the mRNA Level Using RNA Sequencing Data
Our RNA sequencing (RNA-seq) data for the corneal endothelium derived from patients with FECD and healthy subjects were obtained from the DDBJ database. Two other RNA-seq data sets available at the GEO repository were also downloaded. Data preprocessing was conducted utilizing fastp for the removal of adapter bases and low-quality reads. The refined reads were then mapped to the reference genome with the STAR alignment tool, and gene expression was quantified with RSEM. Differential gene expression analysis was performed employing the DESeq2 package in R, using adjusted P values to compare gene expression in the corneal endothelium of patients with FECD against that of healthy controls. The expression levels of specific genes of interest were visualized as boxplots in R using the ggplot2 package.
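The adjusted P values DESeq2 reports come from the Benjamini–Hochberg procedure, which is short enough to write out. A stdlib sketch of the adjustment step alone (illustrative; DESeq2 also performs shrinkage and independent filtering before this point):

```python
def benjamini_hochberg(pvalues):
    """BH-adjusted P values: p * n / rank, made monotone from the largest p down."""
    n = len(pvalues)
    order = sorted(range(n), key=lambda i: pvalues[i])
    adjusted = [0.0] * n
    running_min = 1.0
    for rank_idx in range(n - 1, -1, -1):   # walk from the largest p to the smallest
        i = order[rank_idx]
        running_min = min(running_min, pvalues[i] * n / (rank_idx + 1))
        adjusted[i] = running_min
    return adjusted

adj = benjamini_hochberg([0.005, 0.01, 0.03, 0.04])
assert [round(q, 10) for q in adj] == [0.02, 0.02, 0.04, 0.04]
```

Genes are then called differentially expressed by thresholding these adjusted values rather than the raw P values, which controls the false discovery rate across the whole transcriptome.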
Immunocytochemistry and Aggresome Staining
Cells were fixed with 4% paraformaldehyde for 10 minutes, permeabilized using 1% Triton X-100 (Nacalai Tesque), and subsequently blocked with 2% bovine serum albumin to prevent nonspecific binding. The samples were incubated overnight at 4°C with primary antibodies against fibronectin (dilution 1:1000; BD Biosciences, Franklin Lakes, NJ, USA). Alexa Fluor 488–conjugated goat anti-mouse antibodies (Life Technologies, Carlsbad, CA, USA) were used as secondary antibodies, applied at a dilution of 1:1000 and incubated at 37°C for 45 minutes. Aggresomes were identified using an aggresome-specific reagent (dilution 1:1000; Enzo Life Sciences, Farmingdale, NY, USA) at 37°C for 45 minutes. Nuclei were stained with DAPI (Vector Laboratories, Carlsbad, CA, USA). Fluorescence microscopy was conducted using a DM 2500 microscope (Leica Microsystems, Wetzlar, Germany). Colocalization analysis was performed using ImageJ software (version 1.54f; National Institutes of Health, Bethesda, MD, USA). Manders' coefficients were calculated to quantify the degree of colocalization between aggresome and fibronectin signals.
Western Blotting
The cells from iFECD, iFECD TCF4 −/− , and iFECD TCF4 ΔbHLH were rinsed with ice-cold PBS and lysed using ice-cold radioimmunoprecipitation assay buffer supplemented with phosphatase inhibitor cocktail 2 (MilliporeSigma, Burlington, MA, USA) and a protease inhibitor cocktail (Roche Applied Science, Penzberg, Germany). The lysates were centrifuged at 800 × g for 10 minutes, and the concentration of total proteins in the supernatants was determined using the BCA Protein Assay Kit (Thermo Fisher Scientific). The proteins were then separated by SDS-PAGE and transferred onto PVDF membranes, which were blocked with 3% nonfat dry milk for 1 hour at room temperature and incubated overnight at 4°C with primary antibodies against cleaved caspase-3 (1:1000; Cell Signaling Technology, Danvers, MA, USA), cleaved poly (ADP-ribose) polymerase (cleaved PARP) (1:1000; Cell Signaling Technology), glyceraldehyde-3-phosphate dehydrogenase (GAPDH) (1:3000; Medical & Biological Laboratories Co., Ltd., Tokyo, Japan), TCF4 (1:500), Snail1 (1:1000; Cell Signaling Technology), ZEB1 (1:1000; Cell Signaling Technology), fibronectin (1:20,000; BD Biosciences), phosphorylated Smad3 (p-Smad3) (1:1000; Cell Signaling Technology), Smad2 (1:1000; Cell Signaling Technology), phosphorylated Smad2 (p-Smad2) (1:1000; Cell Signaling Technology), and Smad3 (1:1000; Cell Signaling Technology). Following primary antibody incubation, the blots were washed, incubated with horseradish peroxidase–conjugated secondary antibodies (1:5000; GE Healthcare, Chicago, IL, USA), and visualized by luminol-based enhanced chemiluminescence with the ECL Advanced Western Blotting Detection Kit (Nacalai Tesque). The relative density of immunoblot bands was quantified using ImageJ software.
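Band densitometry of this kind is typically reported as the target band normalized to its loading control (here GAPDH) and then expressed relative to the control lane. The arithmetic, as an illustrative sketch (not the authors' script; the function name is ours):

```python
def relative_density(target: float, gapdh: float,
                     target_ctrl: float, gapdh_ctrl: float) -> float:
    """Target/GAPDH ratio expressed relative to the control lane."""
    return (target / gapdh) / (target_ctrl / gapdh_ctrl)

# a band twice as intense as the control band, at equal GAPDH loading, reads as 2.0
assert relative_density(200.0, 100.0, 100.0, 100.0) == 2.0
# unequal loading is corrected: same apparent band, half the GAPDH, reads as 2.0 too
assert relative_density(100.0, 50.0, 100.0, 100.0) == 2.0
```

Normalizing both to the loading control and to a reference lane is what makes densities comparable across blots.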
Flow Cytometry
For flow cytometry analysis, control and TGF-β2–treated cells were stained with DMEM containing Annexin V (Medical & Biological Laboratories Co., Ltd.) for 15 minutes and harvested using Accumax (Innovative Cell Technologies, San Diego, CA, USA). Flow cytometric analysis was performed using CellQuest Pro software (BD Biosciences) for data acquisition and analysis.
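The readout of this assay is the percentage of events above an Annexin V gate, as reported in the Results. The underlying count is a one-liner (illustrative sketch; CellQuest Pro sets the gate interactively on the dot plot):

```python
def pct_positive(intensities, gate: float) -> float:
    """Percentage of events at or above the Annexin V gate."""
    return 100.0 * sum(1 for x in intensities if x >= gate) / len(intensities)

# toy example: 2 of 4 events fall at or above the gate
assert pct_positive([1, 2, 10, 20], gate=5) == 50.0
```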
Statistical Analysis
All statistical analyses were performed using R software. For comparisons between two groups, statistical significance was assessed using Student's t-test. For multiple-group comparisons, Dunnett's multiple-comparisons test was applied. Statistical significance was defined as P < 0.05 for all analyses. Results are presented as mean ± SEM.
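For completeness, the summary statistics used throughout (mean ± SEM) and the two-sample Student's t statistic can be written out explicitly. A stdlib sketch, not the authors' R code (R's t.test additionally converts the statistic to a P value via the t distribution):

```python
import math
import statistics

def mean_sem(values):
    """Mean and standard error of the mean (sample SD / sqrt(n))."""
    m = statistics.mean(values)
    sem = statistics.stdev(values) / math.sqrt(len(values))
    return m, sem

def student_t(g1, g2):
    """Two-sample Student's t statistic with pooled variance."""
    n1, n2 = len(g1), len(g2)
    v1, v2 = statistics.variance(g1), statistics.variance(g2)
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    return (statistics.mean(g1) - statistics.mean(g2)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

m, sem = mean_sem([1, 2, 3, 4, 5])
assert m == 3 and round(sem, 4) == 0.7071
assert round(student_t([1, 2, 3], [4, 5, 6]), 3) == -3.674
```

Reporting mean ± SEM, as in the Annexin V percentages below, describes the precision of the group mean rather than the spread of individual replicates.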
Knockout of the bHLH in TCF4 in an iFECD In this study, we employed an in vitro model of iFECD due to the limited availability of corneal endothelial cells obtainable from surgical specimens of patients with FECD. We first generated the TCF4 knockout iFECD for proteome analysis to evaluate the effect of TCF4 on other molecules at the protein level. Representative images obtained with phase-contrast microscopy showed that iFECD exhibited a polygonal and monolayer structure. The iFECD TCF4 ΔbHLH variant with a deletion in the bHLH domain that abrogates TCF4 ’s function as a transcription factor also exhibited a morphology similar to that of the control iFECD ( A). The PCR product size of the genomic DNA of the TCF4 gene was approximately 900 bp in iFECD and 700 bp in iFECD TCF4 ΔbHLH ( B), showing the successful deletion of the bHLH domain. Western blotting showed the successful suppression of TCF4-A (54 kDa) (NM_001243234.2) and TCF4-B (72 kDa) (NM_001083962.2) ( C). Quantitative analysis further demonstrated a significant reduction in TCF4-A and TCF4-B expression levels in iFECD TCF4 ΔbHLH compared to iFECD ( D). Sanger sequencing also confirmed the absence of the bHLH domain in the TCF4 region ( E). (Note that the upstream and downstream bases of the bHLH domain are indicated by red or blue lines, respectively.) Identification of DEPs DEPs between iFECD and iFECD TCF4 ΔbHLH were identified using mass spectrometry for quantitative whole-cell proteomics to elucidate the molecular changes induced by TCF4 functional deletion in corneal endothelial cells derived from patients with FECD. The volcano plot revealed a global overview of the protein expression distributions of iFECD compared to the iFECD TCF4 ΔbHLH ( A). Among a total of 6510 proteins detected, 88 DEPs were found, including 52 upregulated (indicated in red dots) and 36 downregulated proteins (in blue dots) with thresholds of |log 2 (fold change)| ≥ 0.5 and P < 0.05 ( A). 
A heatmap with hierarchical clustering of iFECD and iFECD TCF4 ΔbHLH represented variations in the relative abundance of all detected proteins, with row z-scores ranging from −2 (blue) to +2 (red); the clustering visually split the samples into the iFECD and iFECD TCF4 ΔbHLH groups and showed the similarity within each group ( B). The top 30 upregulated and downregulated proteins in iFECD TCF4 ΔbHLH compared to iFECD are shown in and , respectively. The top three upregulated proteins in iFECD TCF4 ΔbHLH were alpha-2A adrenergic receptor (ADRA2A), carbonic anhydrase 2 isoform 1 (CA2), and retinal dehydrogenase 1 (ALDH1A1). The top three downregulated proteins were keratin, type I cytoskeletal 19 (KRT19); calponin-1 isoform 1 (CNN1); and contactin-associated protein 1 precursor (CNTNAP1).
Enrichment Analysis of DEPs
GO enrichment analysis was carried out using the 88 DEPs associated with the knockout of TCF4 . The GO terms were subdivided into three categories: BP, CC, and MF. Response to oxidative stress, response to toxic substances, and cellular response to chemical stress were significantly enriched in BP. The apical part of the cell, collagen-containing ECM, and cell–cell junction were significantly enriched in CC. Actin binding, ECM structural constituent, and cadherin binding were significantly enriched in MF. Reactome pathway analysis indicated that DEPs were enriched in the metabolism of carbohydrates, ECM organization, transport of inorganic cations/anions and amino acids/oligopeptides, cell surface interactions at the vascular wall, and collagen formation ( A). KEGG pathway analysis demonstrated the enrichment of proteoglycans in cancer, sphingolipid metabolism, protein digestion and absorption, ECM–receptor interaction, and ferroptosis ( B). The proteins altered by the knockout of TCF4 were further analyzed by creating PPI networks using GeneMANIA.
For upregulated proteins, the solute carrier (SLC) protein family interacted strongly in the network, indicating an enrichment of amino acid–associated functions ( A). For downregulated proteins, ECM-related functions were potentially regulated by TCF4 , as extracellular structure organization and ECM organization were significantly enriched in the network ( B). Our enrichment analyses indicated the enrichment of multiple ECM-related pathways; therefore, we also investigated the expression levels of the pathway-related mRNAs corresponding to the DEPs using previously published RNA-seq data, including our own. In terms of DEPs related to ECM organization (GO:0030198), COL1A2 , COL8A1 , and SULF1 were downregulated, and LUM , ANTXR1 , CCN1 , and NPNT were upregulated. The mRNA expression levels evaluated in three RNA-seq data sets revealed distinctive patterns of ECM-related molecules in corneal endothelial cells from patients with FECD compared to controls. Three genes showed consistent upregulation across all data sets: ANTXR1 , SULF1 , and COL1A2 ( A–C). FLNB was upregulated in Nakagawa et al. and Chu et al. but not in Nikitina et al. ( D), while CCN1 showed increased expression in Nikitina et al. and Chu et al. but not in Nakagawa et al. ( E). SDC1 exhibited opposite expression patterns between data sets: decreased expression in Nakagawa et al. and increased expression in Nikitina et al., with no significant changes in Chu et al. ( F). COL8A1 showed significant upregulation only in Nakagawa et al. ( G). In contrast, LUM and HAPLN1 showed no significant changes in any data set ( H, I). These results suggest that these pathologic ECM molecules are at least partially regulated by TCF4 .
Effect of TCF4 Deletion on TGF-β2–Mediated ECM Production and Apoptosis
We previously reported that the TGF-β signaling pathway plays an important role in producing excessive ECM and subsequent unfolded protein response–mediated apoptosis; therefore, we evaluated the effect of TCF4 deletion using the FECD cell model. For these experiments, in addition to iFECD TCF4 ΔbHLH (featuring deletion of the bHLH domain in TCF4 ), we utilized iFECD TCF4 −/− (harboring a 20-base deletion in exon 9 of TCF4 ) to further corroborate the effects of TCF4 knockout. Phase-contrast images of iFECD, iFECD TCF4 −/− , and iFECD TCF4 ΔbHLH showed a monolayer sheetlike structure with polygonal cell morphology resembling the in vivo corneal endothelial monolayer ( A, left). Consistent with our previous report, the phase-contrast images showed that TGF-β2 induced cell death in iFECD. By contrast, no cell death was induced by TGF-β2 in iFECD TCF4 −/− or iFECD TCF4 ΔbHLH ( A, right). Sanger sequencing confirmed that 20 bases in exon 9 of TCF4 were deleted in iFECD TCF4 −/− (note that the 20 bases in exon 9 of TCF4 are indicated by red lines) ( B). The exon numbers refer to TCF4-B (NM_001083962.2). Western blotting showed that TGF-β2 induced the cleavage of caspase-3 and PARP in iFECD. Conversely, the TGF-β2–mediated cleavage of caspase-3 and PARP was reduced in iFECD TCF4 −/− and iFECD TCF4 ΔbHLH ( C). Flow cytometric analysis showed that TGF-β2 treatment increased the percentage of Annexin V–positive cells to 31.4% ± 2.0% in iFECD. The percentage of Annexin V–positive cells in TGF-β2–treated iFECD TCF4 −/− cells showed a trend toward reduction (19.8% ± 1.3%), although this difference did not reach statistical significance ( P = 5.28 × 10^-2). In contrast, TGF-β2–treated iFECD TCF4 ΔbHLH cells exhibited a significant decrease in Annexin V–positive cells (18.0% ± 1.6%, P = 3.02 × 10^-2) compared to TGF-β2–treated iFECD cells ( D).
Representative flow cytometric dot plots illustrating the gating parameters for all experimental conditions are presented in . Western blotting confirmed the suppression of TCF4 in iFECD TCF4 −/− . In terms of molecules related to the epithelial–mesenchymal transition (EMT), Snail1 was upregulated in iFECD by TGF-β2, but this TGF-β2–mediated upregulation of Snail1 was suppressed in both iFECD TCF4 −/− and iFECD TCF4 ΔbHLH. ZEB1 was not altered by TGF-β2 in any of the cell lines. The expression level of fibronectin was increased by TGF-β2 in iFECD but not in either iFECD TCF4 −/− or iFECD TCF4 ΔbHLH ( E). Phosphorylation of Smad2 and Smad3 by TGF-β2 was observed in iFECD and iFECD TCF4 −/− , while it was suppressed in iFECD TCF4 ΔbHLH ( F). This differential response in Smad signaling suggests that the mechanism by which TCF4 deletion rescues cells from apoptosis might involve distinct pathways in the two mutant cell lines. Quantitative analysis of these Western blot results and statistical testing are shown in , , and . Immunofluorescent staining showed that TGF-β2 increased fibronectin expression in iFECD but caused a smaller increase in iFECD TCF4 −/− . Aggresome staining showed that TGF-β2 induced unfolded proteins that partially colocalized with fibronectin. By contrast, TGF-β2 did not induce unfolded proteins in iFECD TCF4 −/− ( A). Quantitative analysis of colocalization using Manders' coefficient showed a significantly higher coefficient in TGF-β2–treated iFECD (0.735 ± 0.040) than in TGF-β2–treated iFECD TCF4 −/− (0.152 ± 0.014, P = 1.41 × 10^-3) ( B).
TCF4 in an iFECD In this study, we employed an in vitro model of iFECD due to the limited availability of corneal endothelial cells obtainable from surgical specimens of patients with FECD. We first generated the TCF4 knockout iFECD for proteome analysis to evaluate the effect of TCF4 on other molecules at the protein level. Representative images obtained with phase-contrast microscopy showed that iFECD exhibited a polygonal and monolayer structure. The iFECD TCF4 ΔbHLH variant with a deletion in the bHLH domain that abrogates TCF4 ’s function as a transcription factor also exhibited a morphology similar to that of the control iFECD ( A). The PCR product size of the genomic DNA of the TCF4 gene was approximately 900 bp in iFECD and 700 bp in iFECD TCF4 ΔbHLH ( B), showing the successful deletion of the bHLH domain. Western blotting showed the successful suppression of TCF4-A (54 kDa) (NM_001243234.2) and TCF4-B (72 kDa) (NM_001083962.2) ( C). Quantitative analysis further demonstrated a significant reduction in TCF4-A and TCF4-B expression levels in iFECD TCF4 ΔbHLH compared to iFECD ( D). Sanger sequencing also confirmed the absence of the bHLH domain in the TCF4 region ( E). (Note that the upstream and downstream bases of the bHLH domain are indicated by red or blue lines, respectively.)
DEPs between iFECD and iFECD TCF4 ΔbHLH were identified using mass spectrometry for quantitative whole-cell proteomics to elucidate the molecular changes induced by TCF4 functional deletion in corneal endothelial cells derived from patients with FECD. The volcano plot revealed a global overview of the protein expression distributions of iFECD compared to the iFECD TCF4 ΔbHLH ( A). Among a total of 6510 proteins detected, 88 DEPs were found, including 52 upregulated (indicated in red dots) and 36 downregulated proteins (in blue dots) with thresholds of |log 2 (fold change)| ≥ 0.5 and P < 0.05 ( A). A heatmap illustrated a hierarchical clustering of the iFECD and iFECD TCF4 ΔbHLH representing variations in the relative abundance of all detected proteins with row z -scores ranging from −2 (blue) to +2 (red). A heatmap showed a visually split hierarchical clustering into two groups consisting of iFECD and iFECD TCF4 ΔbHLH groups and the similarity within each group ( B). The top 30 upregulated and downregulated proteins in iFECD TCF4 ΔbHLH compared to iFECD are shown in and , respectively. The top three upregulated proteins in the iFECD TCF4 ΔbHLH were alpha-2A adrenergic receptor (ADRA2A), carbonic anhydrase 2 isoform 1 (CA2), and retinal dehydrogenase 1 (ALDH1A1) . The top three downregulated proteins were keratin, type I cytoskeletal 19 (KRT19); calponin-1 isoform 1 (CNN1); and contactin-associated protein 1 precursor (CNTNAP1) .
GO enrichment analysis was carried out using the 88 DEPs associated with the knockout of TCF4 . The GO terms were subdivided into three categories: BP, CC, and MF. Response to oxidative stress, response to toxic substances, and cellular response to chemical stress were significantly enriched in BP. The apical part of the cell, collagen-containing ECM, and cell–cell junction were significantly enriched in CC. Actin binding, ECM structural constituent, and cadherin binding were significantly enriched in MF. Reactome pathway analysis indicated that DEPs were enriched in the metabolism of carbohydrates, ECM organization, transport of inorganic cations/anions and amino acids/oligopeptides, cell surface interactions at the vascular wall, and collagen formation ( A). KEGG pathway analysis demonstrated the enrichment of proteoglycans in cancer, sphingolipid metabolism, protein digestion and absorption, ECM–receptor interaction, and ferroptosis ( B). The proteins altered by the knockout of TCF4 were further analyzed by creating PPI networks using GeneMANIA. For upregulated proteins, the solute carrier (SLC) protein family strongly interacted in the network, indicating an enrichment of amino acid–associated functions ( A). For downregulated proteins, ECM-related functions were potentially involved in TCF4 , as extracellular structure organization and ECM organization were significantly enriched in the network ( B). Our enrichment analyses indicated the enrichment of multiple pathways related to ECM; therefore, we also investigated the expression level of the pathway-related mRNA corresponding to the DEPs using previously published RNA-seq data, including our own. , , In terms of DEPs related to ECM organization (GO:0030198), COL1A2 , COL8A1 , and SULF1 were downregulated, and LUM , ANTXR1 , CCN1 , and NPNT were upregulated. 
The mRNA expression levels evaluated in three RNA-seq data sets revealed distinctive patterns of ECM-related molecules in corneal endothelial cells from patients with FECD compared to controls. Three genes showed consistent upregulation across all data sets: ANTXR1, SULF1, and COL1A2 ( A–C). FLNB was upregulated in both Nakagawa et al. and Chu et al. but not in Nikitina et al. ( D), while CCN1 showed increased expression in Nikitina et al. and Chu et al. but not in Nakagawa et al. ( E). SDC1 exhibited opposite expression patterns between data sets: decreased expression in Nakagawa et al. and increased expression in Nikitina et al., with no significant changes in Chu et al. ( F). COL8A1 showed significant upregulation only in Nakagawa et al. ( G). In contrast, LUM and HAPLN1 showed no significant changes across all data sets ( H, I). These results suggest that these pathologic ECM molecules are at least partially regulated by TCF4.
TCF4 Deletion on TGF-β2–Mediated ECM Production and Apoptosis
We previously reported that the TGF-β signaling pathway plays an important role in producing excessive ECM and in the subsequent unfolded protein response–mediated apoptosis; therefore, we evaluated the effect of TCF4 deletion using the FECD cell model. For these experiments, in addition to iFECD TCF4 ΔbHLH (featuring deletion of the bHLH domain in TCF4), we utilized iFECD TCF4 −/− (harboring a 20-base deletion in exon 9 of TCF4) to further corroborate the effects of TCF4 knockout. Phase-contrast images of iFECD, iFECD TCF4 −/−, and iFECD TCF4 ΔbHLH showed a monolayer sheetlike structure with polygonal cell morphology resembling the in vivo corneal endothelial monolayer ( A, left). Consistent with our previous report, the phase-contrast images showed that TGF-β2 induced cell death in iFECD. By contrast, no cell death was induced by TGF-β2 in iFECD TCF4 −/− or iFECD TCF4 ΔbHLH ( A, right). Sanger sequencing confirmed that 20 bases in exon 9 of TCF4 were deleted in iFECD TCF4 −/− (the deleted bases are indicated by red lines) ( B). The exon numbers refer to TCF4-B (NM_001083962.2). Western blotting showed that TGF-β2 induced the cleavage of caspase-3 and PARP in iFECD. Conversely, the TGF-β2–mediated cleavage of caspase-3 and PARP was reduced in iFECD TCF4 −/− and iFECD TCF4 ΔbHLH ( C). Flow cytometric analysis showed that TGF-β2 treatment increased the percentage of Annexin V–positive cells to 31.4% ± 2.0% in iFECD. The percentage of Annexin V–positive cells in TGF-β2–treated iFECD TCF4 −/− cells showed a trend toward reduction (19.8% ± 1.3%), although this difference did not reach statistical significance ( P = 5.28 × 10 −2 ). In contrast, TGF-β2–treated iFECD TCF4 ΔbHLH cells exhibited a significant decrease in Annexin V–positive cells (18.0% ± 1.6%, P = 3.02 × 10 −2 ) compared to TGF-β2–treated iFECD cells ( D).
Representative flow cytometric dot plots illustrating the gating parameters for all experimental conditions are presented in . Western blotting confirmed the suppression of TCF4 in iFECD TCF4 −/−. In terms of molecules related to the EMT, Snail1 was upregulated in iFECD by TGF-β2, but this TGF-β2–mediated upregulation of Snail1 was suppressed in both iFECD TCF4 −/− and iFECD TCF4 ΔbHLH. ZEB1 was not altered by TGF-β2 in any of the cell lines. The expression level of fibronectin was increased by TGF-β2 in iFECD but not in either iFECD TCF4 −/− or iFECD TCF4 ΔbHLH ( E). Phosphorylation of Smad2 and Smad3 by TGF-β2 was observed in iFECD and iFECD TCF4 −/−, while it was suppressed in iFECD TCF4 ΔbHLH ( F). This differential response in Smad signaling suggests that the mechanism by which TCF4 deletion rescues cells from apoptosis might involve distinct pathways in the two mutant cell lines. Quantitative analysis of these Western blot results and statistical testing are shown in , , and . Immunofluorescent staining showed that TGF-β2 increased fibronectin expression in iFECD but caused a smaller increase in iFECD TCF4 −/−. Aggresome staining showed that TGF-β2 induced unfolded proteins that partially colocalized with fibronectin. By contrast, TGF-β2 did not induce unfolded proteins in iFECD TCF4 −/− ( A). Quantitative analysis of colocalization using Manders's coefficient showed a significantly higher coefficient in TGF-β2–treated iFECD (0.735 ± 0.040) compared to TGF-β2–treated iFECD TCF4 −/− (0.152 ± 0.014, P = 1.41 × 10 −3 ) ( B).
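Manders's coefficient used in the colocalization analysis above measures the fraction of one channel's intensity (here fibronectin) that lies over pixels positive for the other channel (aggresome). A minimal sketch, assuming a simple fixed intensity threshold (the paper's exact thresholding procedure is not specified here):

```python
import numpy as np

def manders_m1(ch1, ch2, thresh=0.0):
    """Fraction of channel-1 intensity located where channel-2 is
    above threshold (Manders's M1 coefficient)."""
    ch1 = np.asarray(ch1, dtype=float)
    ch2 = np.asarray(ch2, dtype=float)
    total = ch1.sum()
    if total == 0:
        return 0.0
    return ch1[ch2 > thresh].sum() / total

# Toy 2x2 "images": fibronectin (ch1) vs. aggresome (ch2) intensities
fn = np.array([[10.0, 0.0], [5.0, 5.0]])
agg = np.array([[1.0, 0.0], [1.0, 0.0]])
print(manders_m1(fn, agg))  # 15/20 = 0.75
```

The complementary M2 coefficient simply swaps the roles of the two channels.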
The aim of this study was to elucidate the role of TCF4 in FECD pathophysiology by conducting a proteomic analysis of the FECD cell model after CRISPR/Cas9 knockout of TCF4. This manipulation enabled the identification of DEPs and pathways for understanding the molecular mechanisms underlying FECD. Liquid chromatography–MS analysis followed by pathway enrichment analysis identified significant molecular pathways potentially involved in the pathogenesis of FECD. TCF4, a bHLH family member, is located on chromosome 18q21.2 (OMIM #602272; ENSG00000196628). TCF4 regulates gene expression by binding to E-box DNA sequences, thereby influencing a broad spectrum of developmental and cellular processes. However, the role of TCF4 varies depending on the cell type and disease. Numerous studies have linked TCF4 to various neurodevelopmental disorders, with common genetic variants now associated with increased susceptibility to schizophrenia and primary sclerosing cholangitis. Rare mutations in TCF4 cause Pitt–Hopkins syndrome, a condition characterized by intellectual disability and developmental delays. The critical role of TCF4 in neurodevelopment is substantiated by knockout mouse models, which exhibit significant neurodevelopmental defects and abnormal neuronal migration. These findings underscore the importance of TCF4 in normal brain development and function. In the immune system, TCF4 is essential for the development of plasmacytoid dendritic cells, which play a crucial role in antiviral responses. TCF4 is also involved in the EMT, a process vital for embryonic development, tissue repair, and cancer metastasis, in epithelial cells of the kidney and in neuroblastoma cells. In FECD, the discovery that a major portion of patients harbor a trinucleotide repeat expansion in TCF4 has led to significant research efforts directed toward understanding how TCF4 contributes to the pathogenesis of FECD.
Various mechanisms have been proposed to elucidate how the repeat expansion in TCF4 impacts cellular functions in FECD. A primary hypothesis is that TCF4 is dysregulated because the repeat expansion alters the expression levels and splicing of TCF4 transcripts. This disruption can lead to aberrant splicing and dysregulated expression of specific TCF4 isoforms, thereby disrupting normal cellular functions. Another proposed mechanism is RNA-mediated toxicity, in which the expanded-repeat RNA transcripts sequester RNA-binding proteins, such as muscleblind-like (MBNL) proteins, leading to widespread splicing dysregulation. This process mirrors the pathogenic mechanism seen in myotonic dystrophy, another trinucleotide repeat disorder. In FECD, the sequestration of MBNL proteins by expanded repeats in TCF4 RNA results in abnormal splicing of multiple genes, contributing to cellular dysfunction. Repeat-associated non-AUG translation has also been identified as a potential pathogenic mechanism. This process produces toxic polypeptides from expanded-repeat RNA without a traditional start codon. These peptides can aggregate, disrupting cellular homeostasis and inducing cell death. However, despite these significant advancements in understanding the role of TCF4 in FECD, many aspects of the disease mechanism remain elusive, including the exact role of TCF4 in the corneal endothelium. Previous studies mainly examined the transcriptome by analyzing samples obtained from FECD with repeat expansion in TCF4, FECD without repeat expansion, and non-FECD subjects. The limited availability of clinical samples of corneal endothelium has hampered a comprehensive proteome analysis. However, proteomics is an indispensable complement to transcriptome analysis because it captures the dynamic and functional aspects of proteins that are not reflected at the RNA level.
The current pathway analyses at the protein level revealed that multiple ECM-related pathways are associated with TCF4. The guttae induced by excessive deposition of ECM components are diagnostic features of FECD, and they are responsible for reduced visual function due to light scattering. Our proteome analyses presented here add evidence that TCF4 plays a pivotal role in the phenotypic features of FECD. In FECD, corneal endothelial cells lose their epithelial cell phenotype and transform into a mesenchymal phenotype associated with the production of multiple ECM components; some researchers have proposed that this process is the EMT or an endothelial–mesenchymal transition. The EMT is a crucial process in development, wound healing, and pathologic conditions like fibrosis and cancer metastasis. Our current data support an involvement of TCF4 in the EMT in corneal endothelial cells, although further study using multiple EMT markers is necessary. We previously reported that excessive production of ECM proteins, including fibronectin and collagen type 1, results in the formation of unfolded proteins in the corneal endothelium, as observed in samples obtained from patients with FECD. Our previous in vitro study using the FECD cell model showed that TGF-β, which plays a pivotal role in the EMT by activating intracellular signaling pathways such as the Smad and non-Smad pathways, increases the production of ECM, resulting in apoptosis mediated by the unfolded protein response. In the current study, the deletion of TCF4 suppressed this formation of unfolded protein and counteracted TGF-β–mediated apoptosis in the FECD cell model. These results suggest that TCF4 induces the EMT and causes excessive production of pathologic ECM molecules, which eventually cause endoplasmic reticulum stress–induced apoptosis. The remaining question is how TCF4 induces pathologic processes only in patients with FECD but not in healthy subjects.
We recently analyzed three RNA-seq data sets for corneal endothelial cells derived from non-FECD and FECD subjects. We found that one isoform of TCF4, among at least 93 isoforms, was upregulated in the corneal endothelium of patients with FECD harboring repeat expansion in TCF4. The discovery of this isoform, TCF4-277 (ENST00000636400.2), indicated that a dysregulated isoform of TCF4 associated with repeat expansion potentially induces the pathologic process of FECD. Our current results indicate that deletion of TCF4 in the FECD cell model suppresses the disease phenotype, providing further support for the concept that dysregulated TCF4 plays an important role in pathophysiology. One limitation of the present study is the lack of FECD cells without repeat expansion; therefore, the precise role of TCF4 in FECD without expansion remains unclear. Similar analyses using corneal endothelial cells derived from multiple patients with FECD are also necessary, as the severity of FECD varies widely between individuals. In summary, our present findings highlight the critical role of TCF4 in the pathophysiology of FECD, particularly implicating ECM-related pathways and TGF-β–mediated cell death. Further investigation of the role of dysregulated TCF4 might reveal the precise details of FECD pathophysiology and provide potential therapies targeting TCF4 or associated pathways.
Supplement 1
Supplement 2
Supplement 3
Supplement 4
Supplement 5
Awareness of the impact of sex and gender in the disease risk and outcomes in hematology and medical oncology—a survey of Swiss clinicians
Moreover, factors like age, frailty, organ function, and concomitant drugs are often considered to further personalize treatment decisions. Despite these advances, there is limited understanding of the differences between male and female biology and of differing pharmacokinetic responses to cancer drugs. Most knowledge about tumor biology and anticancer drugs is still based on male physiology in cells, animals, and humans. Historically, females have been underrepresented and underreported in biomedical research and clinical trials due to the potential effect of cyclic hormonal changes on results, fertility risk, and additional pregnancy-related considerations. Yet, there are notable and significant sex differences that should be considered when tailoring anticancer therapies. As an example, lean, metabolically active muscle makes up about 80% of male body mass but only 65% of female body mass. The higher fraction of adipose tissue in the female body may lead to higher rates of toxicity, which may require dose reductions during treatment and could also lead to worse health outcomes compared to male patients. In spite of the notable physiological differences between the sexes, in current clinical practice male and female patients receive the same anticancer treatment regimens and medication dosages. In clinical studies, there is a distinct lack of reporting on sex differences pertaining to tumor biology, mutational markers, and treatment response and adverse effects. Addressing this significant knowledge gap in both disciplines could improve sex-specific dosing, treatment efficacy, toxicity, and overall survival (OS) in both sexes.
To address this knowledge gap and raise awareness regarding the need for reporting the sex and gender differences in non‐sex‐related malignancies, we developed a survey for Swiss oncologists and hematologists to assess their current knowledge about the impact of sex and gender in disease risk and outcomes, specifically in clinical practice. In addition to raising awareness about this issue, we aim to motivate clinicians and researchers to be more critical about sex and gender differences in education and daily practice, and to also consider policy changes in basic research, clinical trial conduct and reporting.
METHODS
Our cross-sectional online study was conducted among hematologists and oncologists in Switzerland. To identify potential participants, we used web searches to generate a list of clinicians in hematological and oncological departments in hospitals and medical practices in Switzerland, which identified 56 institutions and 767 eligible clinicians (245 hematologists and 522 oncologists). To recruit potential participants, we emailed the 767 identified clinicians in September 2022 with a description of our cross-sectional study and a link to participate via SurveyMonkey® (an online platform). Two weeks after this initial email, we sent a follow-up email to all individuals on the email list, except those who had already participated or opted out. In November 2022, we also handed out a flyer at the Swiss Oncology and Hematology Congress (SOHC) with information about the study and a QR code to participate in the survey. Our online survey was available over the course of 10 weeks (from September 19 to November 26, 2022) and was closed at the end of the study period.

2.1 Survey instrument
The survey collected data including participant demographics and career-related questions such as working region within Switzerland and clinical fields of work. Using published literature in the fields of hematology and oncology regarding established sex and gender differences, we also developed questions related to clinical knowledge in the areas of hematology, oncology, experimental research, palliative care, and quality of life in older populations, as well as questions on participant perceptions regarding the general importance of sex and gender in cancer-related treatment options.
As these questions were developed specifically for our study, we pilot-tested them in a small group of clinicians from other medical fields to determine whether participants could understand the questions and to avoid creating a knowledge bias in our target study population. We initially developed the survey questions in English. Two translators then translated them into German and French using forward translation. We then back-translated the German and French versions into English using a free online translation tool (DeepL®) to assess the accuracy of the translations. For the questions developed to assess participant awareness of sex and gender in disease risk and outcomes, we used five-point Likert-type responses to measure the participants' level of agreement. After a participant answered a specific question, the online survey then provided the correct response with the corresponding literature citation.

2.2 Statistical analyses
Given our study objectives and small sample size, our analysis focused on descriptive statistics. We used medians and interquartile ranges to describe the distribution of skewed continuous variables, and reported proportions for categorical variables. We reported descriptive statistics for the sample overall and stratified by sex.
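The descriptive statistics described in 2.2 can be sketched with pandas; the column names and responses below are hypothetical, not the study data:

```python
import pandas as pd

# Hypothetical responses (1 = strongly disagree ... 5 = strongly agree)
df = pd.DataFrame({
    "sex": ["F", "F", "M", "M", "F", "M"],
    "age": [34, 41, 55, 38, 29, 62],
    "q_personalized": [5, 4, 3, 4, 5, 2],
})

# Median and interquartile range for a skewed continuous variable
med = df["age"].median()
iqr = df["age"].quantile(0.75) - df["age"].quantile(0.25)

# Proportions for a categorical item, overall and stratified by sex
overall = df["q_personalized"].value_counts(normalize=True)
by_sex = df.groupby("sex")["q_personalized"].value_counts(normalize=True)
print(med, iqr)
```

With `normalize=True`, `value_counts` returns proportions rather than raw counts, matching the proportions reported for categorical variables.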
RESULTS
A total of 150 clinicians completed the survey, corresponding to a 20% response rate. Most participants (82%) worked in the German-speaking region of Switzerland, and 99% worked in a hospital setting (Table ). Approximately half of the participants were biologically female (53%), and 76% were aged between 31 and 50 years. All participants reported concordance between their sex assigned at birth and their gender expression. For consistency, we refer to the participants in this study as female or male. While most participants indicated knowing the difference between sex and gender, biological sex was considered twice as relevant as sociocultural gender role, and responses were comparable between male and female participants. In our sex-stratified analyses, we found that resident or attending physicians comprised 75% of female and 37% of male participants, respectively (Table ). Among the male participants, 60% were chief physicians or heads of departments, compared to only 23% of female participants. Half (54%) of the participants agreed that sex and gender should be incorporated into personalized medicine for it to be accurate, although only 23% strongly agreed with this statement (Figure ; Table ). Over half (59%) of our sample knew about the predominant use of male cells and animals in experimental research, although one-third (34% male vs. 32% female) reported that they were unaware of these disparities. Regarding sex disparities in antitumor treatment, 54% agreed that women are more likely to develop adverse effects from anticancer treatments (Figure ; Table ). 11% and 15% of male and female participants, respectively, were unaware of the greater burden of adverse events (AEs) among women. Nearly 40% of participants (44% male vs. 35% female) were unaware of sex differences in OS in melanoma. Most participants (64% male vs.
77% female) agreed that muscle mass and adipose tissue can affect treatment response. Over half of our participants (63% of female and 59% of male participants) disagreed that nonreproductive carcinomas are independent of sex hormones; however, 15% and 10% of male and female participants, respectively, agreed with this statement. Approximately one-third of the sample knew that rituximab (an anti-CD20 antibody) shows reduced plasma levels in male compared to female patients (Figure ; Table ), whereas 17% of male participants and 10% of female participants disagreed with this statement. Regarding stem cell transplantation, 75% of female and 60% of male participants agreed that survival and toxicity are affected by the biological sex of the recipient or donor. More female participants (82%, compared to 66% of male participants) agreed that male patients in palliative settings discuss impending death more than female patients (Figure ; Table ). Nearly half (48%) of our sample disagreed that older women have a poorer quality of life (43% male vs. 51% female), and 14% of participants responded that they were unsure. At the end of the survey, 42% of participants (37% male vs. 46% female) agreed that the information provided in our survey changed their opinions about the relevance of sex and gender in everyday clinical practice. Furthermore, most participants indicated that they would like to see this topic integrated into continuing education (74%) and research (83%). Among female participants, 85% (compared to 61% of male participants) wanted sex and gender integrated into continuing education, and 90% (compared to 69% of male participants) wanted these topics integrated into research.
DISCUSSION
The results from our cross-sectional online survey indicate that there is room for improved awareness and education regarding sex and gender in cancer research and patient care among Swiss hematologists and oncologists. While a notable proportion of clinicians responded incorrectly to certain statements or indicated that they were unsure of the correct response, there seems to be an important opportunity to raise awareness about sex and gender disparities, given that nearly half of the sample indicated that the information from our survey changed their opinions about the relevance of sex and gender in daily clinical practice. Most participants were aware of the difference between the two terms and considered sex and gender part of "personalized medicine." However, personalized or precision medicine currently aims to identify molecular and biological characteristics to customize patient-specific targeted treatments, and sex and gender are not typically considered. The difficulty with assessing sex and gender begins at the preclinical research level. We recently reported in an international survey among academic cancer researchers that half of the 1247 researchers did not know the sex of the cell lines used in their research, even though data suggest that the sex of cell lines can affect the results of in vitro experiments. This was also reflected in the responses from our current study, given that nearly 40% of male participants did not know about this bias. As a further example, Nunes and colleagues showed that an anticancer high-throughput screen inflicted higher levels of toxicity on male-derived cells, revealing a sex-related difference in cell sensitivity for 79 out of 81 antineoplastic agents.
Similarly, as nearly two-thirds of the participants in our study recognized, there is increasing evidence that sex chromosomes and hormones play an important role in the development of various non-sex-dependent cancers (such as melanoma, lung, bladder, and liver cancer). As recognized by over half of our participants, women are more likely to develop AEs from anticancer treatment. Indeed, women often have higher blood drug concentrations and longer elimination times than men receiving the same drug dose, leading to a higher risk of adverse drug reactions across all drug classes and higher hospitalization rates among women. For example, the SEXIE-R-CHOP-14 trial showed that elderly men treated for diffuse large B-cell lymphoma (DLBCL) had lower serum levels of the anti-CD20 antibody rituximab. By increasing the usual dosage for elderly men from 350 to 500 mg/m², progression-free survival and OS improved compared to previous trials. Although these data have not been incorporated into newer trial designs, on a positive note, the 2023 National Comprehensive Cancer Network (NCCN) guidelines for the treatment of DLBCL now recommend higher doses in men over 60 years of age receiving the R-CHOP21 regimen. Similarly, women have a reduced clearance of 5-fluorouracil (5-FU), a drug commonly used in treating gastrointestinal cancers, leading to higher exposure and subsequently higher, mainly hematological, toxicity. In a systematic review of AEs in clinical trials, Unger et al. reported that women had a 34% increased risk of severe toxicity across all treatment types (cytotoxic drugs, immunotherapies, and targeted therapies). To prevent infections in cases of neutropenia, hematopoietic growth factors such as granulocyte colony-stimulating factor (G-CSF) can be administered to stimulate the maturation and mobilization of granulocytes in the bone marrow. However, the current NCCN guidelines for G-CSF administration do not acknowledge differences between men and women in the prophylactic or therapeutic setting.
Similarly, the Multinational Association of Supportive Care in Cancer (MASCC) febrile neutropenia risk index does not incorporate sex and gender in risk stratification, leading to a lack of inclusion in the European Society for Medical Oncology (ESMO) guidelines for the management of febrile neutropenia. An increased risk of gastrointestinal AEs, such as nausea and vomiting, has also been reported among women receiving anticancer treatments. However, this is neither mentioned in the current American Society of Clinical Oncology (ASCO) antiemetics guidelines, nor are there any sex-specific recommendations for preventing and treating treatment-related nausea and emesis. The lack of sex-adjusted data and guidelines creates a vacuum in clinical practice, which in turn makes anticancer treatment imprecise in daily practice. The effects on the immune system and immune responses are becoming more apparent with the implementation of immune checkpoint inhibitors in anticancer treatment. For instance, in the KEYNOTE-024 trial, male non-small-cell lung cancer (NSCLC) patients derived a significant benefit from the immune checkpoint inhibitor pembrolizumab compared to standard chemotherapy (hazard ratio [HR] for progression or death = 0.39, 95% confidence interval [CI]: 0.26–0.58), while this benefit was substantially lower among female patients (HR = 0.75, 95% CI: 0.46–1.21) in the subgroup analysis. In line with this, a lower benefit was reported for women with advanced melanoma receiving combined immune checkpoint inhibitors. Given that clinical trials are not designed or powered to investigate potential sex differences in treatment effects, no meaningful conclusions can be drawn. Some meta-analyses comparing different immune checkpoint inhibitors in various tumor types have suggested sex differences in the efficacy of these therapies, while others did not find any significant differences.
Pooled analyses of individual patient data from clinical trials could help to address this question until prospective trials are designed with adequate power to detect sex differences. Our study had several limitations. In our questionnaire, we mainly focused on binary biological sex rather than nonbinary gender, given that we developed our survey questions from previously published literature emphasizing biological sex. The lack of information on gender in published studies did not allow for incorporating more gender-related questions, which is a self-perpetuating problem and a limitation of our study. While many treatment regimens, past and present, are applied intravenously, the current trend toward orally available anticancer treatments might make behavioral differences a critical consideration. Among patients with cardiovascular diseases, such as hypertension, diabetes, and hyperlipidemia, men have been shown to have higher adherence rates than women. This could also apply to our patient population, which in turn requires us to consider gender differences more in daily clinical practice. Another limitation of our survey was the convenience sampling used to recruit our study sample. Given that many hospitals and medical practices did not include their residents on their web pages, we may have sampled more experienced physicians, which might bias participant knowledge. We certainly observed demographic trends, given that over half of our female participants were under the age of 40 years and at the beginning of their careers, while over half of the male participants were over 40 years old and held higher positions within the hospital setting. The same difficulty occurred when searching for medical institutions in Switzerland's French and Italian regions, which may have resulted in oversampling of German-speaking clinicians. Our study is also prone to selection bias, given our low response rate, as well as nonresponse bias.
Clinicians interested in sex and gender differences might have been more prone to participating in our survey. Using the survey as an educational instrument and providing the answers might have created a bias when answering the subsequent questions. Taken together, most participants were interested in the topic of sex and gender and had a basic knowledge of theoretical sex differences, but did not have solid information to apply in clinical practice. As the above-stated literature and our survey results show, there is an increasing amount of published data concerning differences between the sexes, although it still needs to be implemented in daily clinical practice. More female participants need to be included in research, and sex-adjusted subgroup analyses must be reported. Notably, more education and studies concerning sex and gender differences are necessary in the medical field. We are convinced that increased awareness and training on sex and gender differences in hemato-oncology are required to ultimately increase consideration of these two critical factors in clinical trial design and treatment decisions and to improve the outcomes of both male and female patients.
All authors had full access to the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Conceptualization: Berna C. Özdemir; Methodology: Xenia Darphin, Jeanne Moor, Berna C. Özdemir; Investigation: Xenia Darphin; Formal Analysis: Xenia Darphin, Anke Richters; Writing – Original Draft: Xenia Darphin, CED, Berna C. Özdemir; Writing – Review & Editing: Xenia Darphin, Jeanne Moor, CED, Anke Richters, Berna C. Özdemir; Visualization: Anke Richters; Supervision: Berna C. Özdemir.
The authors do not have any competing interests to declare in relation to this work.
A Clarification of Responsibility with the Cantonal Ethics Committee Bern (BASEC Nr. Req‐2022‐00436) indicated that our study did not require approval by an ethics committee. We assumed that participants consented to participate in our study after reading a description of the study and voluntarily clicking on the survey link. This work was not supported by any funding.
Assessment of educational technology in lactation physiology by health students

Significant learning transforms knowledge, enabling students to grasp concepts and act autonomously throughout their professional journey. This understanding implies the implementation of pedagogical practices that promote learning, especially when intending to use technological resources that contribute to the professional training of health students. Research involving students from Portugal showed that the applied pedagogical model considered using technological resources, such as audiovisuals, due to their positive effects on the self-learning of postgraduate students. This type of technology holds educational potential because of its visual and filmic language, aiding both the acquisition of scientific knowledge and the autonomy of students in their cognitive constructs. Thus, audiovisual technologies can serve as instruments associated with pedagogical practices, mediating the teaching-learning process. Among the scientific content to be mastered by health students is lactation physiology, which focuses on developing evidence-based breastfeeding (BF) management, as well as counseling women to achieve their breastfeeding goals and meet sustainable development objectives. However, lactation physiology includes a set of complex and abstract contents, related to hormones and their role in producing breast milk. Consequently, educational technologies can be effective mediating tools for teaching and learning, such as videos on BF content, motivation, and self-efficacy in breastfeeding. Investing in technologies addressing this topic highlights the importance of using technologies to mediate BF promotion content.
From this perspective, methodological studies have been undertaken to create and validate with experts a video clip that promotes the learning of lactation physiology, grounded in the Knowledge Translation Model. Therefore, considering the validated tool, the research problem lies in determining whether the educational technology is appropriately tailored for a specific target audience, such as undergraduate health students in the current study. This aligns with the contemporary challenge, identified in a scoping review, of applying new knowledge to reduce the gap between evidence and clinical practice. This challenge involves engaging end-users and stakeholders and considering the specificities of the context where the tool will be used. The hypothesis is that the video clip is suitable for pedagogical use with undergraduate health students and that it is essential to consider the barriers in the local context for its continued use. To support this hypothesis, evaluating the video clip is necessary, and there are various methods for evaluating educational technologies (ET) that can be developed across different knowledge areas. These evaluations should consider the social dimension, requiring an assessment of the target audience's understanding, and should ideally be conducted in a manner that promotes audience engagement, enhancing the tool's continued use. Accordingly, this study aimed to assess the suitability, facilitators, and barriers to using a video clip in teaching lactation physiology to undergraduate health students.

Ethical Aspects

This study is part of the thesis entitled “Evaluation of a Video Clip for Learning Lactation Physiology by Undergraduate Health Students”, approved by the Human Research Ethics Committee of the Federal University of Santa Maria, in accordance with Resolution No. 466/12 of the Ministry of Health.
Study Design, Location, and Period

A cross-sectional study was conducted with online data collection, using tools provided through the Student Portal, with assistance from the Data Processing Center (CPD) of the higher education institution where the study was developed, located in Southern Brazil. Data collection took place between May and September 2021.

Sample, Inclusion, and Exclusion Criteria

There were 2,475 students enrolled in the eight undergraduate health courses at the institution: Nursing, Pharmacy, Physiotherapy, Speech Therapy, Medicine, Nutrition, Dentistry, and Occupational Therapy. It was determined that for the technology to be classified as adequate, 55% of the sample needed to rate it as good. Based on the mentioned population, a 10-percentage-point margin of error, and a 95% confidence level, at least 92 participants were required in the sample. This calculation was performed using the WINPEPI 11.65 program. Inclusion criteria considered were undergraduate health students from a public higher education institution. All health courses were included due to the multi-professional nature of breastfeeding support, aligning with global policies for the promotion, protection, and support of breastfeeding, which necessitate training on the topic. No criteria were established regarding academic aspects and performance. The final sample consisted of 88 students, without course stratification.

Study Protocol

This study is part of a broader Knowledge Translation project, based on a model developed in Canada, known as Knowledge Translation.
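The sample-size figure reported above (at least 92 participants from a population of 2,475 students) is consistent with the standard formula for estimating a proportion with a finite-population correction. A minimal sketch of that calculation, assuming p = 0.55, a 10-percentage-point margin of error, and z ≈ 1.96 (the exact WINPEPI 11.65 routine may round slightly differently):

```python
import math

def sample_size_proportion(N, p, e, z=1.96):
    """Minimum sample size for estimating a proportion, with a
    finite-population correction for a population of size N."""
    n0 = (z ** 2) * p * (1 - p) / e ** 2       # infinite-population estimate
    return math.ceil(n0 / (1 + (n0 - 1) / N))  # apply the correction

# Study values: 2,475 enrolled students, 55% expected to rate the
# technology as good, 10-point margin of error, 95% confidence level.
print(sample_size_proportion(N=2475, p=0.55, e=0.10))  # → 92
```

Without the finite-population correction, the same inputs would require about 96 participants; the correction for the 2,475-student population brings the minimum down to 92, matching the study.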
This project type proposes the application of evidence in various care practice settings and comprises two cycles: a creation cycle, where evidence synthesis is developed and technologies can be created, and an application cycle, which encompasses six phases: adapting knowledge to the local context; evaluating barriers and facilitators for knowledge use; selecting, tailoring, and implementing interventions; monitoring knowledge use; evaluating the impact; and sustaining knowledge use . In the current study, where the video clip was evaluated by students, one phase of the application cycle of the model was addressed: identifying barriers and facilitators to knowledge use. Notably, the video clip was validated by experts in two preceding methodological studies in the creation cycle of the knowledge translation model: the first to create and validate musical content and the second to create and validate imagery content . Its creation was a commitment to translating the complex and abstract knowledge of lactation physiology, aiming to introduce this content and mediate the target audience’s learning to complement the actions of promoting and supporting breastfeeding. The video clip has a duration of 2 minutes and 34 seconds. As a product of the knowledge translation project, the video clip was developed under the researchers’ guidance by professionals from various fields working in the Educational Technology Coordination (CTE) and the Music Department, located at the institution affiliated with the researchers. The final product, titled “Lactashow: The Lactation Cycle,” was registered and is freely accessible at: https://www.youtube.com/watch?v=dhiUfNXu7AE . To meet the study’s objective of assessing the video clip’s suitability for the target audience of university health students, the Assistive Technology Assessment Instrument (IATA) was utilized, and open-ended questions were formulated to evaluate facilitators and barriers. 
The instrument used in this study was originally designed to evaluate educational technology (ET) for an audience with visual impairments. Given that the same instrument might not be reliable under different conditions, such as the population it is applied to, the internal consistency of the instrument in the student sample was assessed. A Cronbach's alpha coefficient of 0.93 indicated good reliability of the instrument's items for this population. Subsequently, through meetings with the CPD team, a weekly schedule was organized to send electronic correspondence with invitations to all students enrolled during the data collection period. The Undergraduate Course Coordination of the institution also contributed by sending an invitation email to the classes. Moreover, the project team promoted the research on social networks. Initially, students received the sociodemographic characterization instrument for the target population. After their first access to the video clip, the IATA was made available, comprising fourteen questions distributed across four attributes: interactivity; objectives; relevance and effectiveness; and clarity. Five open-ended questions were included to capture aspects that participants deemed positive or negative, suggestions for adapting the video clip to the local context, opinions on using audiovisual technologies in the learning process, and technical aspects of access across different devices. The instrument underwent a pilot test with students affiliated with the Research Group, leading to necessary adjustments in the open-ended questions.
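The reliability check described above can be reproduced for any respondents-by-items score matrix. A minimal sketch of Cronbach's alpha with hypothetical 0–2 ratings on the 14 IATA items (the data below are illustrative, not the study's responses):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x k_items) matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of row totals
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical ratings from five respondents on the 14 IATA items
# (0 = inadequate, 1 = partially adequate, 2 = adequate).
ratings = np.array([
    [2] * 14,
    [2] * 13 + [1],
    [1] * 14,
    [2] * 7 + [1] * 7,
    [0] * 14,
])
print(round(cronbach_alpha(ratings), 2))  # → 0.99
```

Highly consistent raters push alpha toward 1; the study's 0.93 comfortably exceeds the 0.7 threshold it cites as ideal.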
Analysis of Results and Statistics

The Assistive Technology Assessment Instrument (IATA) allows for the evaluation of each attribute, with scores ranging from 0 to 2, defined as follows: inadequate (when the technology does not meet the item's definition), partially adequate (when the technology partially meets the item's definition), and adequate (when the technology fully meets the item's definition). An attribute was deemed inadequate if the average score was 0; partially adequate if the average score ranged from 0.1 to 1; and adequate if the average score varied from 1.1 to 2. The average of these attributes led to the overall classification of the video clip's adequacy. Cronbach's Alpha was computed to verify the internal consistency of the scale items, considering a value above 0.7 as ideal. The Mann-Whitney test was employed to compare attribute scores between groups defined by binary variables (gender, age, undergraduate course, enrollment in a course covering lactation physiology content, completion of such a course, and self-assessed prior knowledge of the content). The level of significance was set at 5% (p < 0.05). Responses to the open-ended questions underwent categorization. To maintain the confidentiality of participant identities in the presentation of qualitative data, a coding system was utilized, denoting each entry with the letter “E” followed by a sequential number.
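The IATA scoring rules above — item scores of 0, 1, or 2 averaged into attribute means and classified against the 0 / 0.1–1 / 1.1–2 cut-offs — can be sketched as follows. The attribute means used here are hypothetical illustrations chosen to reproduce a global mean of 1.72, not the study's data:

```python
def classify(mean_score):
    """IATA cut-offs: 0 = inadequate; above 0 up to 1 = partially
    adequate; above 1 up to 2 = adequate."""
    if mean_score == 0:
        return "inadequate"
    return "partially adequate" if mean_score <= 1 else "adequate"

# Hypothetical attribute means on the 0-2 scale.
attributes = {
    "interactivity": 1.75,
    "objectives": 1.70,
    "relevance and effectiveness": 1.68,
    "clarity": 1.75,
}

for name, mean in attributes.items():
    print(f"{name}: {mean} -> {classify(mean)}")

# The global classification is the mean of the attribute means.
overall = sum(attributes.values()) / len(attributes)
print(f"overall: {overall:.2f} -> {classify(overall)}")  # → overall: 1.72 -> adequate
```

The between-group comparisons of these scores (the Mann-Whitney tests) are not reproduced here.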
Eighty-eight undergraduate health students participated in the evaluation of the video clip. The majority were female (71.6%, n=63), and the predominant age group was 18 to 29 years (85.2%, n=75). Regarding their enrolled courses, Nursing had the highest incidence (28.5%, n=25), followed by Pharmacy (18.3%, n=16), Speech Therapy (12.5%, n=11), Occupational Therapy (10.3%, n=9), Medicine (9%, n=8), Nutrition (9%, n=8), Physiotherapy (7.9%, n=7), and Dentistry (4.5%, n=4). Most participants did not report any disabilities (94.3%, n=83) and indicated a high use of digital devices and the internet (99.1%, n=87). The video clip was evaluated by the target audience of university health students as adequate in all attributes. This indicates that the video clip promotes engagement in the learning process autonomously, as it can be accessed at any time by the student, according to their needs in clinical practice and academic life. For science, the suitability of an educational technology for the student target audience represents an advancement in the frontier of knowledge on the topic of breastfeeding (BF), as it translates the knowledge of lactation physiology, a content considered complex and abstract, yet essential for understanding the breastfeeding process. In this study, the video clip was evaluated as adequate, as were all of its attributes. In the attribute of interactivity, the item “allows easy access to the presented topics” scored the highest.
Regarding the objective attribute, the video clip was most highly rated for “stimulating learning about the covered content.” In relevance and effectiveness, the highest score was for the video clip “providing adequate and necessary resources for its use.” Lastly, in the attribute of clarity, the item “enables reflection on the covered content” received the highest score in the evaluation of the video clip. There was no significant difference in attribute scores across the independent sociodemographic variables or prior knowledge of lactation physiology. This indicates to educators the suitability of the technological product for their pedagogical practice, as they can use the video clip without its application in a learning environment depending on academic profile or performance. Considering that there are other technologies supporting this process, using them in a complementary manner in training could mediate learning toward the global goal for exclusive breastfeeding rates, which are currently below expectations.

In the qualitative assessment, 31 participants indicated facilitators for use, such as the attractiveness and retention promoted by the musical and animation features, technical aspects of ease of access, and learning aspects, among others.

Students noted aspects they considered positive about the music:
The music is an excellent tool for attraction. (E5)
It has the capacity to engage. (E10)
It is easily memorable. (E10, E12)
It remains memorable even after just one listen. (E25)
The music has a pleasant quality. (E25)

In terms of the positive aspects of the visual content, students highlighted:
The inclusion of the family in the breastfeeding process depicted at the start of the video. (E7)
The attractive and humorous imagery. (E14, E15, E24, E31)
The transition from the real world to the animated one. (E25)
The personification of hormones as entities that contribute to the functioning of lactation physiology. (E26)

Regarding the use of audiovisual technologies in the learning process, students observed that the video clip:
Facilitates enjoyable learning. (E1)
Is easy to understand. (E1, E9, E11, E13, E20)
Features attractive teaching methods. (E7, E13, E22)
Stimulates learning in various ways. (E1, E11, E25)
Effectively combines audio and visual elements to enhance content retention. (E9, E10, E11, E14, E17)
Aligns with the new teaching paradigm, utilized in a hybrid approach. (E8, E22, E24, E29, E31)

As for the technical aspects, students identified the following positives:
The brief duration of the video. (E9)
Accessibility to a diverse audience (E12, E15), including those with hearing impairments, due to the inclusion of subtitles. (E10)
Ease of access (E25), including availability on social media platforms. (E15)
The video does not require a high-speed internet connection for viewing. (E20)

It was evident that the musical and imagery content, which comprise the animation of the video clip, was considered attractive by the students, indicating the potential for revisiting the content. The students believe that the characters in the video clip, representing the hormones involved in lactation physiology, concretize the content and facilitate learning, and that the musicality aids in the retention of knowledge. Regarding technical aspects, the open access and short duration facilitate its use in teaching practices. Additionally, participants reported barriers to use, such as the speed of the music and the need for prior knowledge. Students identified technical issues, such as access difficulties for those without internet (E29), and considered certain aspects of the music and visual content as negative:
The music is too fast considering the amount of information in the lyrics. (E6, E24)
The speed at which the images change is a factor that makes understanding difficult. (E6, E26)
Difficulty in paying attention to both audio and visual elements at the same time. (E6)
Images differ from those presented in lactation physiology textbooks. (E15)

Regarding the learning process, students noted that:
Prior knowledge is necessary. (E3)
Just watching the video makes it confusing to understand the action of each hormone. (E26)

The issues related to barriers allow us to indicate the video clip as a mediator of learning that requires prior knowledge and more than one approach to the object of study. There was no indication from the students to adapt the video clip to the local context. The video clip was deemed appropriate by the participants, achieving an overall average of 1.72 across four attributes: interactivity, objective, clarity, and relevance. These positive outcomes indicate acceptance of the video clip and demonstrate the feasibility of using this technology in teaching, thereby promoting student autonomy, as the video clip can be accessed freely. In terms of the interactivity attribute, the video clip, considering both its audio and animation, meets the students' needs for engagement in the educational process, facilitating easy access to topics in lactation physiology. A similar study evaluated an app for teaching and learning the Portuguese language for the hearing impaired, also using the same evaluation instrument, and deemed it appropriate in the interactivity attribute. This outcome suggests that the app provides users with autonomy in learning a second language. Developing and incorporating interactivity in any technology acknowledges its potential to foster active learning. The appropriateness in the objective attribute suggests that the video clip effectively supports the learning of the physiological process of human lactation. Another study, evaluating technology in two countries, also observed positive results in this attribute among Brazilian and Portuguese participants.
The study involving the hearing impaired also found the technology appropriate in the “objective” attribute . For technological learning to be impactful, it should integrate seamlessly into the individual’s cognitive structure in a non-arbitrary way . Understanding physiology is vital to comprehend breastfeeding management, as hormone interactions positively influence milk maintenance and production. Thus, this learning is crucial in offering support aligned with theoretical content on milk production. Moreover, the video clip’s engaging strategy, which includes the use of music and animation to depict hormones involved in lactation physiology, was rated as suitable. Facilitators for using the video clip, as indicated by the participants, included its pleasant musicality, audiovisual aspect, appeal, and ease of understanding, showcasing its potential for usability. The video clip’s capacity to engage students in the learning process through its musical elements aligns with a study where researchers noted that music makes content more relatable, engaging students and encouraging them to develop critical and reflective thinking . Videos and music are regarded as motivational tools in the teaching-learning process, particularly when used in conjunction . The clarity attribute pertains to how well the video clip’s content meets students’ needs in comprehending a complex and abstract subject, signifying that this educational technology facilitates understanding of the physiological process of human lactation. A parallel study that applied the same evaluation method for assistive technology also reported high scores in the clarity attribute . When material is easily comprehensible, it possesses substantial potential. For learning to be significant, learners need to meaningfully connect with the material, logically integrating it into their cognitive framework . 
The perceived clarity of the video clip is remarkable, especially given the complexity of lactation physiology, which involves hormone interactions, mammary tissue development, and breast milk production, and is an abstract concept that is challenging to grasp due to its intangible nature. The assessment of relevance showed that the resources of the video clip are sufficient to spark students' interest in its use for learning lactation physiology. This is consistent with an assessment conducted by Brazilian and Portuguese participants, where both groups achieved an average of 1.65 in the same attribute. However, the results suggest that while the video clip is adequate, it may not fully stimulate behavioral changes in research participants, as only 55.7% rated this attribute as adequate. This implies that while the video clip can be utilized, it is advisable to combine it with innovative pedagogical practices and the educator's experience in an active learning process that can influence discussions of real professional practice situations where this knowledge will be applied. Health students must comprehend the physiological processes involved in lactation to effectively support breastfeeding in professional practice. Yet, some health education programs do not include breastfeeding topics in their curricula. A study in the United States indicated that 71% of pediatric and obstetric medical professionals feel insecure about advising on breastfeeding, and many still recommend weaning in unnecessary situations.
Moreover, these professionals are often unaware of the physiological processes that occur from gestation to milk “let-down”. It is crucial to note that breastfeeding guidance should start in prenatal care and be reinforced during the childbirth process. Inadequate or incomplete guidance during prenatal care, coupled with a lack of support during labor and delivery, increases the likelihood of early weaning. Thus, it is essential for health students to understand the physiological processes of lactation to effectively participate in multidisciplinary teams in various clinical settings. Participants identified as a facilitator the potential of the video clip to adapt to a hybrid teaching model. This is particularly relevant in the context of the Covid-19 pandemic experienced by students during the data collection period of this study. With social distancing recommendations, the academic community had to transition from entirely in-person to remote classes. When organized and executed properly, hybrid teaching can lead to meaningful learning. Regarding the barriers to using the evaluated technology, the speed of the music in relation to the large amount of information presented was seen as a potential limitation in the learning process. In addition, participants indicated the necessity of prior knowledge to understand the content, suggesting that the video clip might act as a subsumer, in line with Ausubel's learning theory. Evaluating the barriers was essential to ensure the effective use of this technology in its intended context. The analysis of the two barriers reported by participants is consistent with the theory of meaningful learning: for learning to be effective, it is vital to understand what the learner already knows so that the new content has logical significance. This underscores the notion that the use of technologies by educators enhances students' comprehension of content, a positive finding also demonstrated in other studies.
It is critical to consider the identified barriers in different local contexts to ensure the continued use of the technology. Given the evaluation of the need to incorporate other audiovisual technologies into the learning process, we emphasize the importance of continuous monitoring of the use of the technology in this study, to understand its impact in the context where it is implemented. This approach is in line with the Knowledge Translation Model, highlighting the researcher’s role in navigating the creation-action cycle phases, aiming to sustain the tool’s use in the intended context. Study limitations Firstly, the low participation rate among students, attributed to the Covid-19 pandemic context and the subsequent transition to remote classes, impacted the study’s adherence. Despite using online questionnaires to facilitate remote access, student responses fell below expectations. Additionally, a critical limitation was the exclusion of students with visual or auditory disabilities from data collection, since the video clip was not developed with accessibility features for these groups. As a result, the findings regarding the video clip’s suitability for undergraduate health students are not generalizable to these specific populations. Therefore, while the study’s findings are relevant, they should be interpreted with caution due to these significant constraints. Contributions to the Fields of Nursing and Education The study’s findings can contribute to the recognition of systematic technology evaluation by end-users, pointing to possibilities for local context adaptations and usability enhancements. The study also aids in guiding the adoption of hybrid teaching strategies in health education, particularly in breastfeeding education. 
Notably, even though technological prospecting is a long-term endeavor, the research group sent a communication with the access link to this product to the Brazilian Nursing Association (ABEN Nacional), requesting its broad dissemination among Brazilian universities to support evidence-based and hybrid teaching. Furthermore, a promotional poster with the link and QR Code for accessing the video clip was sent to services connected to the Regional Health Coordination. This initiative to promote breastfeeding also aimed to support evidence-based clinical practice with technology and innovation. In the broader knowledge translation project, maintaining the use of this knowledge product is a key strategy.
The video clip is an interactive, objective, clear, and relevant tool, suitable for use in pedagogical practice with undergraduate health students. This educational technology, which translated complex and abstract knowledge of lactation physiology, can serve as a learning strategy that enhances hybrid teaching in training. The positive assessment of its suitability and facilitators, such as attractiveness, memorability promoted by the video clip, and ease of access, highlight the tool’s potential to introduce lactation physiology content and facilitate learning. This complements the actions of promoting and supporting breastfeeding, enabling autonomous professional practice. The target audience’s perception that there is no need to adapt the video clip to the local context suggests the potential for applying this educational technology in undergraduate health courses at Public Higher Education Institutions. |
Comparison of the Effects of Different Palatal Morphology on Maxillary Expansion via RME and MSE: A Finite Element Analysis | 5bc35ce6-f5a2-4c2e-8dd3-08ee99f89631 | 11411151 | Dentistry[mh] | Introduction Maxillary transverse deficiency (MTD) is a common clinical problem. Inadequate maxillary width has been reported in 9.4% of the population and nearly 30% of adult patients (Brunelle, Bhat, and Lipton ). MTD often leads to severe malocclusion, such as crowding and crossbite, which not only affects occlusal function and esthetics but may also cause functional problems such as upper airway narrowing, increased nasal airway resistance, and altered tongue position (McNamara et al. ). Patients with a high-narrow palate and narrowed dental arch, as well as skeletal class III patients with insufficient maxillary development, usually require maxillary expansion treatment. Rapid maxillary expansion has been widely used to treat insufficient maxillary width, and its expansion effect comes mainly from three parts: expansion of the mid-palatal suture, expansion of the alveolar bone, and tipping movement of the teeth (Liu, Xu, and Zou ). However, many undesirable side effects of conventional RME have been identified, such as dentoalveolar tipping, drop of the palatal cusp, increase of the Wilson curve, root resorption, decrease of the alveolar bone level, gingival recession, and periodontal dehiscence (Garib et al. ; Lemos Rinaldi et al. ; Baysal et al. ; Lo Giudice et al. ). To increase the orthopedic effect and reduce the side effects of traditional RME, surgically assisted rapid maxillary expansion (SARME) and various bone-borne anchorage devices have been introduced and have shown clinical success (Lee et al. ; Gunyuz Toklu, Germec-Cakan, and Tozlu ; Lin et al. ; Asscherickx et al. ). However, SARME is an invasive procedure that entails greater trauma and pain.
Most of the currently available expanders are hybrid, composed of both miniscrews and tooth-borne parts. MARPE is either a tooth-bone-borne or a solely bone-borne RPE device with a rigid element that connects to miniscrews inserted into the palate, delivering the expansion force directly to the basal bone of the maxilla (Lee et al. ). It was designed to maximize the skeletal effects and minimize the dentoalveolar effects of expansion, based on the findings of previous histological studies revealing that the midpalatal suture does not fully ossify in humans even at an elderly age, possibly due to the constant mechanical stress that it undergoes (N'Guyen, Ayral, and Vacher ; Poorsattar Bejeh Mir et al. ). The maxillary skeletal expander (MSE) is a particular type of MARPE that can achieve orthopedic expansion of the palate through bicortical mini-implant anchorage even in adults (Lee, Moon, and Hong ). In clinical practice, patients with mouth breathing often have narrow dental arches, a high-arched palate, and protruding upper front teeth, while patients with a skeletal class III pattern often have an underdeveloped maxilla and an overdeveloped mandible. Although their palatal morphology differs, both groups often have imbalances between the widths of the upper and lower dental arches and require arch expansion. Patients differ in palatal morphology, which affects the position of the expander and the outcome of maxillary expansion, yet there is currently little research on how, and by what mechanism, palatal morphology influences the expansion effect. Although the literature has reported the influence of palatal depth on the mechanical effects and displacement trends of the maxillary body and the dentition during expansion (Matsuyama et al. ), that work established only a maxilla model rather than a three-dimensional model of the craniomaxillofacial complex, so its simulation fidelity is limited.
In this study, the palatal index (PI) was used to objectively evaluate the shape of the palate, avoiding the influence of subjective factors and visual errors (Paul and Nanda ). The palatal morphology is classified by calculating the ratio of the palatal height to the palatal width between the maxillary first and second premolars. Based on this index, craniomaxillary complex models with a normal palate and a high-arched palate were established. The mechanical stress distribution in the craniomaxillary complex resulting from maxillary expansion cannot be obtained using traditional cephalometric appraisals. Finite element analysis provides a proven method for structural simulation and mechanical analysis and has been used extensively in research on maxillary expansion (Lee, Moon, and Hong ). Finite element analysis replaces complex structures with a finite number of elements of simple geometric shape and plays an essential role in the field of medical biomechanics (Panagiotopoulou ). Therefore, this study aims to simulate the mechanical effects of RME and MSE on the craniomaxillofacial complex and bone sutures and to analyze the displacement of the maxilla, mid-palatal suture, and dentition using three-dimensional finite element models of different palatal morphologies. The results of the present study can provide a theoretical basis for the selection, design, and practical application of RME and MSE in clinical practice.
Materials and Methods A patient with normal palatal morphology (PI = 36%) was selected as the subject for this study. The study was approved by the hospital's ethics committee, and the patient signed an informed consent form. The finite element model of the craniomaxillofacial complex was generated using volumetric data from the CBCT scan (slice thickness, 0.3 mm) using Mimics software (version 20.0; Materialise, Belgium). First, thresholds for teeth and bone tissue were determined, noise was reduced, and the corresponding image data were extracted as accurately as possible and segmented from other structures such as soft tissue to establish a preliminary craniomaxillary complex mask. Region growing was then used to group pixels with similar properties and to eliminate noise, soft tissue, and artifacts. Mask editing was used to erase unnecessary parts, leaving only the craniomaxillary complex, and to fill in needed but discontinuous structures, finally yielding an accurate three-dimensional structure of the craniomaxillary complex. The reconstructed model was exported to 3-matic Research (version 12.0; Materialise, Belgium), where the frontomaxillary, zygomaticomaxillary, zygomaticotemporal, pterygopalatal, and mid-palatal sutures were marked and the thickness of the sutures was set to 0.5 mm (Fricke-Zech et al. ); then, part of the suture surface was removed by Boolean operation, yielding a model structure similar to the physiological one. The completed bone suture model was exported in stl format, and the stl file was imported into Geomagic Studio 2014 (Geomagic, USA) to construct the normal-palate solid model and perform noise reduction and model trimming. The periodontal ligament (PDL) was modeled on the root shape with an average thickness of 0.25 mm. In this study, PI is used to objectively describe the palate morphology.
Redman, Shapiro, and Gorlin first described the PI for palatal measurements and established standards of palatal dimension and shape to compare the reportedly malformed palates. The index indicates the relative height or narrowness of a palate and has been used in some studies on palatal morphology (Aluru et al. ). PI is the ratio of the palatal height to the palatal width between the first premolar and the second premolar, and it can be regarded as high‐narrow palate when the PI > 41% (Howell ; Perkiömäki and Alvesalo ). Then, the high‐palate model was obtained by the reconstruction of the palatal morphology of the craniomaxillofacial complex using Geomagic Studio 2014 with PI = 50% (Figure ). In the ANSYS Workbench software (version 19.0; ANSYS, American), corresponding Hyrax and MSE entity models based on physical dimensions were established on the normal‐palate and high‐palate craniomaxillofacial models, respectively. The four implants of MSE were constructed as cylindrical structures (length, 11.0 mm; diameter, 1.5 mm) and were implanted perpendicular to the bone surface at the outer 3 mm of the mid‐palatal suture (Park et al. ). The Hyrax arms are connected to the lingual surface of the first premolars and the first molars, respectively, and the MSE arms are connected to the lingual surface of the first molars. Four models were made by assembling the craniomaxillofacial complexes and the expanders in Ansys 19.0 (Figure ). Model 1: Normal‐palate craniomaxillofacial complex with RME expander Model 2: Normal‐palate craniomaxillofacial complex with MSE expander Model 3: High‐palate craniomaxillofacial complex with RME expander Model 4: High‐palate craniomaxillofacial complex with MSE expander Four‐noded tetrahedral elements were used for volumetric mesh generation using Ansys 19.0 (Figure ). The maxilla, alveolar bone, dentition, and bone sutures were sectioned into 1 mm tetrahedrons aimed at increasing the accuracy of models. 
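To make the classification above concrete, the PI calculation and the >41% high-narrow threshold can be sketched in a few lines of Python (an illustrative sketch, not part of the study's workflow; the measurement values below are hypothetical):

```python
def palatal_index(palatal_height_mm: float, interpremolar_width_mm: float) -> float:
    """Palatal index (PI): palatal height divided by the palatal width
    measured between the first and second premolars, as a percentage."""
    return 100.0 * palatal_height_mm / interpremolar_width_mm


def classify_palate(pi_percent: float, threshold: float = 41.0) -> str:
    """A palate is regarded as high-narrow when PI exceeds 41%."""
    return "high-narrow" if pi_percent > threshold else "normal"


# Hypothetical measurements reproducing the two model geometries (PI = 36% and PI = 50%)
print(classify_palate(palatal_index(14.4, 40.0)))  # PI = 36% -> normal
print(classify_palate(palatal_index(20.0, 40.0)))  # PI = 50% -> high-narrow
```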
Other parts of the craniomaxillofacial complex were sectioned into 5 mm tetrahedrons (Meng et al. ; Priyadarshini et al. ). The modulus of elasticity and Poisson's ratio for cortical bone, cancellous bone, bone sutures, teeth, implants, and expanders were defined (Table ). The connection relationship among different structures was set (Table ). The three‐dimensional coordinates were x (horizontal plane), y (sagittal plane), and z (vertical plane) and the positive values were set rightward, backward, and upward. Model material properties were defined as homogeneous, continuous, and isotropic based on previous research, and many studies have shown the accuracy of the finite element analysis which simulates the mechanical behavior of complex biologic structures (Park et al. ; Bezerra et al. ). The expanders were activated by applying 0.5 mm of transverse forced displacement along the X ‐axis in the four models, 0.25 mm on each side. Five landmarks of palate were measured to evaluate the displacement of mid‐palatal suture, which are on the level of the cusp of the canine, the buccal cusp of the first premolar, the buccal cusp of the second premolar, the mesiopalatal cusp of the first molar, and the mesiopalatal cusp of the second molar, respectively. The points are also on the parallel line of the mid‐palatal suture at the midpoint of the lingual cervical margin of the central incisor (Figure ). Several landmarks of teeth were measured to evaluate the three‐dimensional displacement of dentition. The occlusal landmarks are the midpoint of the incisal edge, the cusp of canine, the buccal cusp of premolar, and the mesiobuccal and mesiopalatal cusp of molar. The radicular landmarks are the apex of the anterior teeth, the apex of the buccal root of the premolar, the apex of the mesiobuccal, and the palatal root of the molar (Figure ). 
These landmarks can be visualized clearly and located accurately in the three-dimensional finite element models of the craniomaxillary complex, ensuring the validity and accuracy of the measurement data (Park et al. ; Eom et al. ).
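The Results below summarize equivalent (von Mises) stress nephograms. For reference, the von Mises equivalent stress is the standard scalar derived from the three principal stresses; a minimal sketch (illustrative values, not outputs of the models):

```python
import math


def von_mises(s1: float, s2: float, s3: float) -> float:
    """Equivalent (von Mises) stress from the three principal stresses
    (all in the same units, e.g., MPa)."""
    return math.sqrt(0.5 * ((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2))


print(von_mises(10.0, 0.0, 0.0))  # uniaxial loading: equals the applied stress, 10.0
print(von_mises(5.0, 5.0, 5.0))   # pure hydrostatic stress: 0.0 (no distortion)
```

A purely hydrostatic state yields zero equivalent stress, which is why the nephograms highlight regions of shape distortion rather than uniform compression.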
Results 3.1 Von Mises Stress Distribution on Craniofacial Complex Comparative analysis of the equivalent stress nephograms of the four models (Figures and ) shows that Model 3 has the smallest equivalent stress on the craniomaxilla, although the frontal process of the maxilla, the medial orbital wall, the orbital floor, and the buccolingual alveolar region of the anchoring teeth show obvious stress concentration. The stress on the craniomaxillary complex in Model 1 is slightly greater than that in Model 3. The maximum equivalent stress is located in the buccal alveolar process of the maxillary first premolar; in addition, stress is distributed around the piriform foramen, the frontal process of the maxilla, and the medial orbital wall. The stress on Model 2 and Model 4 is significantly greater than that on Model 1 and Model 3 and is concentrated in the area around the miniscrews, with the two anterior miniscrews bearing more stress than the posterior ones. 3.2 Mechanical Distribution of Dentition and Alveolar Bone The maximum equivalent stress of the dentition in all four models is concentrated in the neck region of the anchoring teeth. The dentition is loaded differently by the two expanders, and MSE reduces the stress concentration at the necks of the anchored teeth. Moreover, the shape of the palate has a great influence on the loading of the dentition: the force on the dentition is reduced in the high-arched palate model compared with the normal arch, although the mechanical distribution over the dentition is similar. As shown in Figure , the equivalent stress in the alveolar bone in Model 1 and Model 3 is mainly concentrated on the buccal and lingual bone walls of the anchoring teeth, while in Model 2 and Model 4 it is effectively reduced.
In addition, with both RME and MSE, the stress on the alveolar bone in the high-arched palate group was lower than that in the normal palate group. 3.3 Maximum Principal Stresses on Sutures As shown in Figure , from top to bottom the sutures are, in order, the mid-palatal suture, frontomaxillary suture, zygomaticomaxillary suture, zygomaticotemporal suture, and pterygopalatal suture. The maximum principal stress of each suture differs among the four models, and the maximum principal stress produced by MSE expansion on the mid-palatal suture and pterygopalatal suture is much larger than that produced by RME expansion (Table ). 3.4 Transverse Displacements of Palatal Sutures The lateral displacement of the palatal suture in Model 2 and Model 4 is significantly larger than that in Model 1 and Model 3. The mid-palatal sutures of Model 1 and Model 3 expand in a “V” shape that is relatively broad anteriorly and narrow posteriorly, and the expansion of the mid-palatal suture in Model 3 is greater than that in Model 1. In Model 2 and Model 4, by contrast, the expansion of the anterior and posterior parts is essentially the same, with the posterior part slightly larger than the anterior (Table ). From the frontal view (Figure ), the expansion of the upper and lower parts of the palatal suture in Model 2 and Model 4 is essentially uniform, whereas in Model 1 and Model 3 the expansion of the upper part of the suture is significantly smaller than that of the lower part, a V-shaped pattern that is narrow above and wide below. 3.5 Comparison of Displacements of the Maxilla All four models show large lateral displacements, which in Model 2 and Model 4 are significantly larger than those in Model 1 and Model 3. In the sagittal direction, the displacement of the maxilla in the four models is small, but the displacement trends differ.
In Model 1, the sagittal displacement trend of the maxillary dentition is slightly backward while that of the maxillary zygomatic process is forward. In Model 2, the upper incisors with their alveolar bone move slightly backward, and the other parts of the maxillary complex move forward in the sagittal direction. The maxilla in Model 3 shows forward movement in the sagittal direction, whereas Model 4 shows no obvious sagittal displacement. Compared with the other three groups, Model 3 has a more obvious forward-outward rotation trend (Figures and ). 3.6 Three-Dimensional Displacement of Dentition The three-dimensional displacements of the dentition are shown in Table . In the X-axis direction, the displacement trends of the crown and root in each model are consistent. The movement of the crown and root landmarks in Model 2 and Model 4 is significantly larger than that in Model 1 and Model 3. The displacements of the crowns and roots in Model 3 are larger than those in Model 1, and the difference is greater for root displacement. The crown-to-root ratio in the X-axis direction in Model 1 is significantly larger than that in the other three groups, and the crown-to-root ratio in Model 4 is the smallest (Figure ). The sagittal displacement in Model 3 and Model 4 is larger than that in Model 1 and Model 2, essentially along the negative direction of the Y-axis. In the Z-axis direction, the crowns and roots in Model 1 and Model 3 both move in the negative direction, indicating tooth extrusion. In contrast, the crowns and roots of the posterior teeth in Model 2 and Model 4 move along the positive direction of the Z-axis, indicating intrusion of the teeth. The intrusion of the posterior teeth in Model 4 is greater than that in Model 2.
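As a hedged illustration of how these signed displacements are read (not code from the study; the numeric values are invented), the axis convention from the Methods (+x rightward, +y backward, +z upward) and the crown-to-root ratio used as a tipping indicator can be expressed as:

```python
def describe_displacement(dx: float, dy: float, dz: float, tol: float = 1e-6) -> list:
    """Map a signed displacement vector (mm) of a maxillary tooth landmark to
    clinical direction labels, following the paper's convention:
    +x rightward, +y backward, +z upward (so negative z means extrusion)."""
    labels = []
    if abs(dx) > tol:
        labels.append("rightward" if dx > 0 else "leftward")
    if abs(dy) > tol:
        labels.append("backward" if dy > 0 else "forward")
    if abs(dz) > tol:
        labels.append("intrusion" if dz > 0 else "extrusion")
    return labels


def crown_root_ratio(crown_dx_mm: float, root_dx_mm: float) -> float:
    """Transverse crown displacement over transverse root displacement:
    ~1.0 indicates bodily (translational) movement, larger values indicate tipping."""
    if root_dx_mm == 0.0:
        return float("inf")  # pure tipping about the root apex
    return crown_dx_mm / root_dx_mm


# Invented example values:
print(describe_displacement(0.2, 0.0, -0.05))  # ['rightward', 'extrusion'] (RME-like)
print(crown_root_ratio(0.5, 0.25))             # 2.0 -> pronounced crown tipping
print(crown_root_ratio(0.25, 0.25))            # 1.0 -> bodily movement
```

Under this convention, the extrusion seen in Model 1 and Model 3 versus the intrusion in Model 2 and Model 4 corresponds simply to the sign of dz at the posterior landmarks.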
Discussion The growth and morphology of the palate are influenced by genetic and environmental factors and are determined by multiple factors, including the timing of development, soft tissue, and peak growth periods. There is a close relationship between the shape of the palate, the body of the tongue, and the shape of the dental arch (Hashimoto et al. ; Kurabeishi et al. ; Yu and Gao ), indicating that the shape of the palate is of great significance for clinical diagnosis and treatment. MTD is one of the most common malocclusions for which RME and MSE are usual treatments, but their clinical effects are affected by several factors, one of which is palatal shape. In clinical practice, many adolescent patients with mouth breathing have a narrow, high palate. Meanwhile, skeletal class III patients with maxillary underdevelopment and normal palatal morphology are also common in the clinic. Both groups need to be treated with maxillary expansion. However, the width and height of the palate may affect the mechanical distribution and effect of the maxillary expansion. Consequently, it is necessary to explore the influence of palatal morphology on expansion treatment, and the PI is used to objectively reflect the relative palatal height and evaluate the palatal morphology. Matsuyama et al. studied the effect of palatal depth on maxillary expansion by establishing maxillary models with the palatal depth increased by 4 and 8 mm, respectively. They established only the maxilla model, not a craniomaxillary complex model, and found that the high-vault model showed the smallest lateral displacement of the teeth and expansion of the mid-palatal suture, and the largest deformation of the expander arm.
The results of this study are different: we found that the lateral displacement of the palatal suture in the high-arched palate group was larger during RME expansion, which may be related to the larger stress concentration in the upper part of the craniomaxillary complex and the greater rotation of the maxillary body as its center of resistance moves upward. In addition, its extrusion of the posterior teeth is more pronounced. Therefore, vertical control of the posterior teeth deserves attention when performing RME expansion in patients with a high palatal arch. Previous studies have found that the vertical height of the Hyrax expander affects the tipping movement of the anchoring teeth: when the vertical height of the expander is level with the center of resistance of the anchoring teeth, their tipping movement is very small; otherwise, tipping movement of the anchoring teeth will occur (Araugio et al. ; Gómez-Gómez et al. ). The shape of the palate affects the placement position of the expander, especially its vertical position, which may be an important reason for the different stress distributions and displacement trends in patients with different palate shapes during expansion. Further research with more craniomaxillary models of different palatal morphologies and expanders is needed to more accurately guide the placement and application of the expander in the clinic. In this study, we found that the high-palate group shows greater palatal suture widening and less tipping movement of the anchor teeth, while the normal-palate group shows more pronounced tooth inclination; therefore, attention should be paid to the side effects caused by tipping movement of the teeth when treating a patient with a normal palate. Several studies have shown that micro-implant-assisted maxillary expansion can effectively expand the palatal basal bone and improve the effectiveness of the expansion (Lee et al.
; Lee, Moon, and Hong ). The results of this study show that the shape of the palate had little effect on the expansion achieved with MSE, and that MSE can achieve greater palatal suture expansion and vertical control while avoiding adverse dental effects. Research on the mechanical effects of maxillary arch expansion remains controversial. This study found stress concentrations around the piriform foramen, the frontal process of the maxilla, the medial orbital wall, and the orbital floor during RME. Not only must the resistance of the midpalatal suture be overcome; the maxilla and surrounding bone tissue are also sources of resistance to arch expansion, which is consistent with other research results (MacGinnis et al. ). In addition, this study found that the stress in the MSE group was mainly concentrated in the palatal bone, especially the area around the implants, with the two anterior implants bearing more stress. Therefore, attention should be paid to preventing loosening of the anterior implants due to stress concentration. Additionally, stress distribution was also observed around the medial orbital wall, zygomatic bone, and nasal bone, indicating that MSE also has mechanical effects on the craniofacial bone tissue. However, Nelson Elias et al. showed that the palatal bone was subjected to the greatest tension and the sphenoid pterygoid process to the greatest pressure. Controversy also remains over the effects of RME and MSE on the bone sutures. Leonardi et al. found that most of the sutures around the maxilla were affected by RME. Ghoneima et al. believed that the force generated by RME mainly affected the anterior sutures of the craniomaxilla, such as the mid-palatal and frontomaxillary sutures, rather than the posterior sutures, such as the zygomaticomaxillary sutures.
Our study also found that the equivalent stress of the frontonasal suture and the mid‐palatal suture during RME was greater than that of the posterior sutures, which may explain the V‐shaped expansion of the mid‐palatal suture. Some studies have reported that the resistance of the posterior part of the palatal bone increases with age (Melsen and Melsen ; Lee et al. ). The maximum principal stress and equivalent stress of the pterygopalatine suture in the MSE group were greater than those in the RME group, indicating that MSE better overcomes the resistance of the posterior palatine bone; this may explain why MSE allows parallel expansion of the mid‐palatal suture and can be applied to adult patients. In this study, finite element analysis was used to examine the effects of RME and MSE on craniomaxillary complexes with different palatal morphologies. The RME and MSE models were built to the actual clinical dimensions of the expanders, and material properties of the craniomaxillary structures were assigned on the basis of an extensive literature survey, bringing the models closer to the clinical situation and providing useful guidance for the selection, placement, and treatment planning of expanders in patients with different palate shapes. A limitation of this study is that finite element analysis can simulate only bone tissue and transient effects, not long‐term changes; further methodological development is needed to simulate the real effects of maxillary expansion more faithfully. Future work could attempt to establish a finite element model incorporating both the soft and hard tissues of the growing craniomaxillofacial region to better approximate the clinical course of maxillary expansion.
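Tipping of the anchor teeth, as discussed above, is typically quantified in FEA post-processing from nodal displacements. A minimal sketch of that calculation is given below; the displacement values are hypothetical and are not data from this study.

```python
import math

def tipping_angle_deg(crown_disp_mm, root_disp_mm, crown_root_dist_mm):
    """Estimate the tipping angle of a tooth from the lateral (expansion-
    direction) displacement of a crown node and a root-apex node."""
    return math.degrees(math.atan2(crown_disp_mm - root_disp_mm, crown_root_dist_mm))

# Hypothetical post-processing values: crown moves 0.50 mm laterally,
# root apex 0.10 mm, over a 20 mm crown-to-apex distance.
angle = tipping_angle_deg(0.50, 0.10, 20.0)
ratio = 0.50 / 0.10  # crown/root displacement ratio; close to 1 means bodily movement
print(f"tipping angle = {angle:.2f} degrees, crown/root displacement ratio = {ratio:.1f}")
```

A ratio near 1 indicates near-parallel (bodily) movement, as reported here for MSE, while a large ratio indicates the crown-dominated tipping seen with RME in the normal-palate group.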
Conclusion The finite element simulations of the mechanical effects of RME and MSE on the craniomaxillary complex under different palatal shapes support the following conclusions.
1. The shape of the palate modifies the effects of RME and MSE, and its influence on the stress distribution and displacement of the craniomaxillary complex is greater with RME than with MSE.
2. The lateral displacement of the mid‐palatal suture with MSE is significantly larger than with RME. The mid‐palatal suture widens in a V‐shape during RME expansion but expands evenly during MSE expansion, which may reflect MSE's greater ability to overcome the resistance of the posterior part of the craniomaxillary complex.
3. Anchor teeth are more prone to tipping with RME in patients with a normal palate, whereas MSE may achieve better vertical control because of its smaller crown/root displacement ratio and the intrusive movement of the molars.
Yaohui Pan contributed to the conception and design of this study, performed the main literature retrieval, data acquisition, and statistical analysis, and wrote and revised the manuscript. Wenjing Peng assisted with the literature retrieval, figure preparation, and statistical analysis and drafted the manuscript. Yanyu Wang performed the statistical analysis and contributed substantially to the major revision.
The authors declare no conflicts of interest.
Nanocarriers boost non-systemic fluazinam transportation in plants and microbial community enrichment in soil

Oomycetes encompass a diverse group of plant- and animal-pathogenic microorganisms that pose significant threats to agricultural production, aquaculture, and ecological stability. Among them, Phytophthora capsici is a major soil-borne pathogen, ranking fifth among the top ten plant pathogenic oomycetes. With a broad host range spanning 26 plant families, including Solanaceae and Cucurbitaceae, it poses a severe risk to the yield and quality of crops. Similarly, Pseudoperonospora cubensis, an obligate parasitic oomycete, causes cucumber downy mildew and thus substantial losses in cucumber production. Chemical control is the primary method for combating oomycete diseases, but fungicide resistance is escalating. For instance, metalaxyl resistance remains a persistent challenge after over four decades of its application in agriculture. Notably, mutations in the PcORP1 gene have been linked to P. capsici resistance to oxathiapiprolin, a highly efficient novel fungicide for managing oomycete diseases. Fluazinam, a fungicide with a unique mechanism, disrupts ATP production in pathogen cells by uncoupling oxidative phosphorylation. Unlike other oomycete inhibitors, it does not exhibit cross-resistance. Fluazinam has garnered significant attention for plant disease management owing to its high efficacy, broad spectrum, and low resistance risk. In China, it has been registered to control diseases such as potato blight, pepper blight, pepper anthracnose, apple brown spot, and Chinese cabbage clubroot. The recommended dosage of fluazinam is 200 to 500 mg L − 1 in the field ( http://www.chinapesticide.org.cn/zwb/dataCenter ), and the maximum residue limit of fluazinam in pepper is 3.0 mg kg − 1 according to GB 2763–2021 in China. 
However, its non-systemic nature limits its efficacy: it is absorbed by plants only in small amounts and can be used only as a protective fungicide, which hinders its widespread use. In recent years, nanotechnology has emerged as a promising avenue for pesticide delivery, improving efficacy and reducing environmental impact. Pan et al. developed a dual-responsive nanosystem based on Pro@BMMs–PMAA/Fe 3+ nanoparticles (NPs), which enabled the non-systemic fungicide prochloraz to be absorbed by both fungal mycelia and plants. Similarly, Wu et al. developed a temperature-responsive and environmentally friendly nanogel via one-step microemulsion polymerization, enabling lambda-cyhalothrin to translocate while ensuring the safety of faba bean plants and improving soil microbial community diversity. Nanocapsules containing pesticides offer sustained release and increased stability in complex environments. Yet, to date, there have been no reports on using nanotechnology to confer systemic mobility on fluazinam or to achieve its systemic transport in pepper, a crop with a high degree of lignification. The use of fungicides can alter the physicochemical properties and the microbial community structure of soil. Bacteria in vegetable-soil ecosystems are crucial for soil nutrient cycling, regulating vegetable growth, and suppressing soil-borne pathogens because of their high abundance and diversity. Fungicide application may directly or indirectly modify soil properties and microbial communities, affecting the stability of the plant-soil ecosystem. However, research on fluazinam's impact on soil microorganisms is very limited, with some studies suggesting potential toxicity in microcosms and a reduction in Plasmodiophora brassicae abundance in soil. 
Nanoencapsulated pesticides maintain their integrity for extended periods in complex external environments, which reduces the overall concentration of free pesticide in soil and thereby minimizes their impact on soil microorganisms. Further investigation is warranted to understand the effects of fluazinam, particularly when nanosized, on soil microbial diversity and composition. In this study, nanocapsules loaded with fluazinam were developed, and their in vitro antimicrobial activity against oomycetes such as P. capsici and in vivo efficacy against different oomycete diseases were evaluated. Using fluorescent labeling and HPLC, the systemic translocation of fluazinam in pepper was assessed. Additionally, the mechanism underlying the enhanced antimicrobial activity of nanoencapsulated fluazinam and its impact on soil microorganisms were explored. Chemicals and reagents Fluazinam (purity of 95.8%) was acquired from Shandong Union Pesticide Industry Co., Ltd. Methylene diphenyl diisocyanate (MDI) and epoxy resin (ER, epoxy value N/100 of 0.41–0.47) were procured from Wanhua Chemical Group Co., Ltd. (Shandong, China) and Lanxess Special Chemicals Co., Ltd. (Shanghai, China), respectively. Sodium lignosulphonate (SL, molecular weights 1.0 × 10 4 –1.2 × 10 4 Da), with a relative molecular weight of 2000 and a sulfonation degree of 3.45 mol kg − 1 , was acquired from MeadWestvaco Inc. (Virginia, USA). Calcium dodecyl benzene sulfonate (pesticide emulsifier 500#) and polyoxyethylene styrylphenyl ether (PSE) were supplied by Zibo Yunchuan Chemicals Co., Ltd. (Shandong, China). Polyethyleneimine and cyclohexanone were obtained from Aladdin Reagent Co., Ltd. (Shanghai, China). Preparation of microcapsules, submicrocapsules, and nanocapsules Fluazinam (1.02 g), MDI (2 g), ER (2 g), 500#, and PSE were dissolved in cyclohexanone to create the organic phase, while SL (5 g) was dissolved in deionized water to form the aqueous phase. 
The organic phase was gradually added to the aqueous phase and homogenized to generate an emulsion. Subsequently, polyethyleneimine (0.1 g) was added, and the mixture was stirred at 200 rpm and allowed to react at room temperature for three hours to produce the pesticide-loaded microcapsules (MCs), submicrocapsules (SubMCs), and nanocapsules (NCs). Additionally, a nanoemulsion (NEW) and a suspension concentrate (SC) were formulated as controls. Characterization of the microcapsules, submicrocapsules and nanocapsules The particle size distribution of the NCs was assessed using a Zetasizer Nano ZS (NanoBrook 90Plus PALS, Brookhaven, USA). A laser particle size analyzer (LS-POP 6, Zhuhai OMEC Instrument Co., Ltd., China) was employed to determine the particle size distributions of the MCs, SubMCs, and NCs. The morphology of the capsules was examined using scanning electron microscopy (SEM; Merlin Compact, Zeiss, Germany) and transmission electron microscopy (TEM; Talos F200X G2, FEI, USA). ImageJ software was used to measure the shell thickness of 30 capsules each of the MCs, SubMCs, and NCs and to calculate the average thickness. Furthermore, a Fourier transform infrared spectrometer (FTIR; Tensor II, Bruker Optics, Germany) was used to record the infrared spectra of the materials. Release profile The release profiles of the four formulations (NEW, MCs, SubMCs, and NCs) were examined. Samples were dispersed in 5 mL of deionized water, 95 mL of n-hexane was added, and the mixture was rolled at 90 rpm at 25 °C. At various time points, 1 mL of the n-hexane phase was withdrawn for analysis by high-performance liquid chromatography (Agilent 1290, USA). The sustained-release behavior was assessed from the concentration of fluazinam dissolved in the n-hexane over time. The cumulative release rate was calculated using the formula: Cumulative release rate = ( C t / C 0 ) × 100%. 
Where C t represents the concentration of fluazinam dissolved in n-hexane at time t, and C 0 is the total concentration of fluazinam when the prepared sample is fully dissolved in the 95 mL of n-hexane. Bioactivity of fluazinam formulations on different Phytophthora strains P. capsici strain BYA5 was obtained from pepper in Anhui, China. P. sojae strain P6497 was provided by Professor Brett Tyler at Oregon State University, United States. The P. nicotianae strain was obtained from tobacco in Yunnan, China. The sensitivity of these Phytophthora strains to the NEW, MCs, SubMCs, NCs, SC, and the technical material (TC) of fluazinam was assessed using the mycelial growth rate method. Fluazinam TC was dissolved in dimethyl sulfoxide (DMSO) and diluted to 20 mg mL − 1 as a stock solution; for each assay, the stock solution was diluted to the working concentration with sterile deionized water, and the final concentration of DMSO in the PDA medium was adjusted to 0.1% (vol/vol). The other formulations were diluted in sterile deionized water to prepare a concentration series ranging from 0.5 mg L − 1 to 20 mg L − 1 . To ensure uniform inocula, mycelial plugs 5 mm in diameter were excised with a hole punch from three-day-old colonies of each Phytophthora strain and placed face-down in the center of fungicide-amended PDA plates. The plates were incubated at 25 °C for 4–7 days, and colony diameters were measured along two perpendicular directions. The experiment was performed in triplicate for each concentration, with three plates per replicate. PDA medium containing an equivalent amount of DMSO or sterile deionized water served as the blank control. The inhibition was calculated using the formula: Inhibition = [(Control colony diameter – Treated colony diameter) / (Control colony diameter – 5 mm for the mycelial plug)] × 100%. 
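The inhibition formula and the tested concentration series can be turned into a dose-response estimate. The sketch below uses hypothetical colony diameters, and its EC50 step is a simple log-linear interpolation, a stand-in for the full virulence (probit) regression normally fitted to such data:

```python
import math

PLUG_MM = 5.0  # diameter of the inoculation plug

def inhibition(control_diam, treated_diam):
    """Percent inhibition of radial growth, correcting both colony
    diameters for the 5 mm mycelial plug."""
    return (control_diam - treated_diam) / (control_diam - PLUG_MM) * 100.0

# Hypothetical colony diameters (mm) for one strain.
control = 65.0
doses = [0.5, 1.0, 2.5, 5.0, 10.0, 20.0]      # mg/L
diams = [58.0, 50.0, 41.0, 33.0, 24.0, 14.0]  # treated colonies

inhib = [inhibition(control, d) for d in diams]

def ec50(doses, inhib):
    """EC50 by linear interpolation of inhibition vs log10(dose)."""
    for (c1, y1), (c2, y2) in zip(zip(doses, inhib), zip(doses[1:], inhib[1:])):
        if y1 <= 50.0 <= y2:
            x1, x2 = math.log10(c1), math.log10(c2)
            x = x1 + (50.0 - y1) * (x2 - x1) / (y2 - y1)
            return 10 ** x
    raise ValueError("50% inhibition not bracketed by the tested doses")

print(f"EC50 = {ec50(doses, inhib):.2f} mg/L")  # prints "EC50 = 4.20 mg/L"
```

With real data, each concentration's inhibition would be the mean of the three replicate plates before fitting.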
Statistical analysis of the experimental data was performed using the Pesticide Laboratory Biometric Data Processing System (PBT), from which the virulence regression equations, median effective concentrations (EC 50 ), and correlation coefficients of the tested agents were derived. Efficacy of fluazinam formulations on pepper Phytophthora blight and downy mildew To assess the effectiveness of the fluazinam formulations against pepper Phytophthora blight, six-to-eight-week-old pepper seedlings (cv. Zhongjiao 209) were sprayed with 200 mg L − 1 solutions of the five fluazinam formulations (NEW, MCs, SubMCs, NCs, and SC), respectively. Twenty-four hours after spraying, each pot (3 × 3 cm) was inoculated with 3 mL of a zoospore suspension (10 5 zoospores mL − 1 ). Healthy control plants were treated with sterile deionized water only, while plants inoculated with an equal amount of zoospore suspension without fungicide treatment served as the infected control. The experiment was performed in triplicate, with 16 plants in 8 pots per replicate. The disease index was evaluated 3 to 5 days post-inoculation, following a previously established protocol . The control efficacy was calculated using the formula: Control efficacy = (Disease index of control – Disease index of treatment) / Disease index of control × 100%. To investigate the translocation of the active ingredient of fluazinam from the cucumber hypocotyl and to assess the control efficacy of the five formulations against downy mildew, solutions of NEW, MCs, SubMCs, NCs, and SC (400 mg L − 1 ) were sprayed onto cucumber hypocotyls (0.6 mL per plant), respectively. Cucumber plants were grown in a greenhouse at 25 °C and 80% relative humidity with a 12-hour photoperiod. Plants treated with sterile deionized water served as the control. 
Six hours after fungicide treatment, a sporangial suspension (10 5 sporangia mL − 1 ) was inoculated onto the first and second true leaves. The experiment was performed in triplicate, with 12 plants in 12 pots per replicate. Disease severity was assessed after 7 days on a 0-to-9 disease scale, and the disease index was calculated . The control efficacy was determined using the formula above. ATP production in Phytophthora after being treated with fluazinam ATP levels were assessed using a commercial ATP assay kit (Beyotime, Shanghai, China) following the provided guidelines. ATP content was quantified as nmol g − 1 of protein for mycelium treated with either sterile deionized water or the five fluazinam formulations at a concentration of 10 mg L − 1 . Protein concentration was determined using a bicinchoninic acid (BCA) assay kit (ST023, Cowin Biotech Co., Ltd, Beijing, China). This experiment was performed with P. capsici strain BYA5 and three replications per treatment. Upward translocation behaviors of five fluazinam formulations Pepper seeds were first soaked in deionized water for 24 h and drained, then wrapped in moist gauze to promote germination. Subsequently, 4–6-week-old seedlings were submerged in opaque plastic tubs (20 cm × 20 cm × 20 cm) containing 400 mL of Hoagland nutrient solution (a blend of calcium, macronutrients and secondary nutrients, and various trace elements in a 2:2:1 ratio) and kept under light for an extended period. When the pepper plants reached approximately 30 cm in height, their roots were cleaned and transferred into a diluted formulation (each formulation was diluted to 400 mg L − 1 with Hoagland nutrient solution, with the total quantity maintained at 400 g). The plants were then cultivated at 25 °C and 80% relative humidity with a 12-hour photoperiod for 1, 2, or 5 days. 
After cultivation, the whole plant was rinsed thoroughly with deionized water, and stem-and-leaf tissue (2.0 g) and roots (1.0 g) were excised. Each sample was immersed in 2 mL of acetonitrile (MeCN), combined with 1 g of NaCl, gently stirred for 5 min, and centrifuged at 4000 rpm for 5 min, after which 1 mL of the supernatant was collected. The supernatant was transferred to a 2 mL disposable centrifuge tube containing 150 mg of anhydrous MgSO 4 and cleanup sorbents: 50 mg of C 18 plus 10 mg of graphitized carbon black (GCB) for stem-and-leaf samples, and 50 mg of C 18 for roots. The concentrations of fluazinam in the different plant parts were determined by HPLC on an Agilent TC-C 18 column (4.6 mm × 250 mm, 5 μm; Agilent, USA) . Additionally, to compare uptake more systematically, root uptake was characterized by the root concentration factor (RCF), calculated as: RCF = Concentration in roots / Concentration in the external solution. Furthermore, to better visualize the translocation behavior of the fluazinam formulations, fluorescein isothiocyanate isomer (FITC) was used as a tracer and incorporated into the formulations. Their movement within pepper seedlings was then observed by confocal laser scanning microscopy . Following procedures similar to those described above, pepper seedlings were exposed to two distinct fluazinam formulations at 600 mg L − 1 for 6 days, after which root slices prepared with a freezing microtome were used for observation . Environmental safety evaluation of five fluazinam formulations To assess the diversity of soil microbial communities, farmland soil (Zhecheng County, Shangqiu City, Henan Province, China) was chosen as the substrate to provide an appropriate environment for microbial growth. 
Initially, 5 mL of each of the five fluazinam formulations (200 mg L − 1 ) or deionized water was added to 200 g of dried farmland soil, followed by 50 g of deionized water to ensure thorough mixing. Each treatment was replicated three times and incubated at 25 °C in the dark for 7 days. Throughout this period, the soil moisture content was maintained at 60–70% by adding deionized water as needed. Subsequently, 3 g of soil was collected from each group for DNA extraction; samples were stored at − 80 °C before testing, and sequencing was conducted by Majorbio Bio-Pharm Technology Co., Ltd. (Shanghai, China). Total microbial genomic DNA was extracted from the soil samples using an E.Z.N.A. ® soil kit (Omega Bio-tek, USA). The concentration and purity of the DNA were assessed using an ultra-micro spectrophotometer (NanoDrop 2000, USA) and 1% agarose gel electrophoresis. The hypervariable V 3 –V 4 region of the bacterial 16 S rRNA gene was amplified with the primer pair 338 F (5'-ACTCCTACGGGAGGCAGCAG-3') and 806R (5'-GGACTACHVGGGTWTCTAAT-3'). Purified amplicons were pooled in equimolar amounts and paired-end sequenced on an Illumina PE300 platform (Illumina, San Diego, USA) according to the standard protocols of Majorbio Bio-Pharm Technology Co., Ltd. (Shanghai, China). The obtained data were analyzed on the online Majorbio Cloud Platform ( https://www.majorbio.com ). Induction of genes related to plant endocytosis The pepper plants (as described above) were sprayed with 400 mg L − 1 of the five fluazinam formulations or with deionized water (control), with 12 plants per treatment. After treatment, the plants were returned to the climate chamber at 25 °C for three hours, and the leaves were then homogenized in liquid nitrogen. 
Approximately 300 mg of pepper leaves were used for total RNA extraction, employing the SV Total RNA Isolation Kit (Promega, Beijing, China), following the manufacturer’s protocol. Plant exocytosis-related genes were selected based on literature reports, and the BLAST tool on the NCBI database ( https://www.ncbi.nlm.nih.gov/ ) was utilized to identify exocytosis-related genes in pepper . Quantitative real-time PCR (qRT-PCR) evaluation of actin The gene expression levels associated with plant endocytosis were subsequently validated via quantitative real-time PCR (qRT-PCR). Total RNA underwent reverse transcription using a first-strand synthesis kit featuring PrimeScript™ RT Master Mix (Perfect Real Time, TaKaRa, Japan) following the manufacturer’s guidelines. qRT-PCR was conducted utilizing a QuantStudioTM 6 Flex Real-Time PCR System (Applied Biosystems, Thermo Fisher Scientific, USA) and Power SYBR ® Green Master Mix (Applied Biosystems). Primer details were provided in Table . Reactions were conducted in triplicate under the following conditions: pre-denaturation (95 °C, 2 min), followed by 40 cycles comprising denaturation (95 °C, 10 s) and annealing (60 °C, 30 s) . Relative quantities (RQ) of products were determined utilizing the 2 −ΔΔCt method . The actin-depolymerizing factor 1 ( Actin ) gene (accession number, LOC107842967) was served as a reference to normalize the quantification of the target gene expression. See Table for primer details. Fluazinam (purity of 95.8%) was acquired from Shandong Union Pesticide Industry Co., Ltd. Methylene diphenyl diisocyanate (MDI) and epoxy resin (ER, epoxy value N/100 of 0.41–0.47) were procured from Wanhua Chemical Group Co., Ltd. (Shandong, China) and Lanxess Special Chemicals Co., Ltd. (Shanghai, China), respectively. Sodium lignosulphonate (SL, molecular weights: 1.0 × 10 4 –1.2 × 10 4 Da), with a relative molecular weight of 2000 and sulfonation of 3.45 mol kg − 1 , was acquired from MeadWestvaco Inc. 
(Virginia, USA). Calcium dodecyl benzene sulfonate (pesticide emulsifier 500#) and polyoxyethylene styrylphenyl ether (PSE) were supplied by Zibo Yunchuan Chemicals Co., Ltd. (Shandong, China). Polyethyleneimine and cyclohexanone were obtained from Aladdin Reagent Co., Ltd. (Shanghai, China). Fluazinam (1.02 g), MDI (2 g), ER (2 g), 500#, and PSE were dissolved in cyclohexanone to create the organic phase, while SL (5 g) was dissolved in deionized water to form the aqueous phase. The organic phase was gradually added to the aqueous phase to generate an emulsion through homogenization. Subsequently, polyethyleneimine (0.1 g) was added, stirred at 200 rpm/min, and allowed to react at room temperature for three hours to produce pesticides-loaded microcapsule(s) (MCs), submicrocapsule(s) (SubMCs), and nanocapsule(s) (NCs). Additionally, nanoemulsion (NEW) and suspension concentrate (SC) were formulated as controls. The particle size distribution of NCs was assessed using a Zetasizer Nano ZS (NanoBrook, 90Plus PALS, Brookhaven, America). A laser particle size analyzer (LS-POP 6, Zhuhai OMEC Instrument Co., Ltd., China) was employed to determine the particle size distribution of MCs, SubMCs, and NCs. The morphology of these capsules was examined using scanning electron microscopy (SEM; Merlin Compact, Zeiss, Germany) and transmission electron microscopy (TEM, Talos F200X G2, FEI, America). The Image J software was utilized to measure the thickness of 30 capsules from MCs, SubMCs, and NCs, followed by calculating the average thickness. Furthermore, a Fourier transform infrared spectrometer (FTIR, Tensor II, Bruker Optics, Germany) was utilized to analyze the infrared spectrum of the materials. The release profiles of the four formulations (NEW, MCs, SubMCs, and NCs) were examined. Samples were dissolved in 5 mL of deionized water, followed by adding 95 mL of n-hexane, and the mixture was rolled at 90 rpm/min at 25 °C. 
At various time points, 1 mL of the liquid was extracted from the n-hexane for analysis using high-performance liquid chromatography (Agilent 1290, USA). The sustained release property was assessed by determining the concentration of fluazinam dissolved in n-hexane over time. The cumulative release rate was calculated using the formula: Cumulative release rate = ( C t / C 0 ) × 100%. Where C t represents the concentration of fluazinam dissolved in n-hexane at time t, and C 0 is the total concentration of fluazinam of the prepared samples dissolved in 95 mL of n-hexane. Phytophthora strains P. capsici strain BYA5 was obtained from pepper in Anhui, China . P. sojae strain P6497 was provided by Professor Brett Tyler at Oregon State University, United States . P. nicotianae strain was obtained from tobacco in Yunnan, China. The sensitivity of these Phytophthora strains to the NEW, MCs, SubMCs, NCs, SC, and technical material (TC) of fluazinam was assessed using the mycelial growth rate method . Fluazinam TC was dissolved in dimethyl sulfoxide (DMSO) and diluted to 20 mg mL − 1 as a stock solution; for each assay on PDA plate, the stock solution was diluted to working concentrations with sterile deionized water, and the final concentration of DMSO in the PDA agar medium was adjusted to 0.1% (vol / vol). The other formulations were diluted in sterile deionized water to prepare various concentration gradients (ranging from 0.5 mg L − 1 to 20 mg L − 1 ). To create a uniform sample, a hole punch with a 5 mm diameter was employed to extract mycelial plugs from the Phytophthora colony. The mycelial plugs of 5 mm diameter from three days old colony culture of Phytophthora strains were excised and placed face-down in the center of fungicide amended PDA plates. The plates were incubated at 25 °C for 4–7 days and colony diameter was measured using a cross-sectional approach. 
The experiment was performed with triplicates for each concentration and three plates were subjected as a replicate. PDA medium with the equivalent amount of DMSO or sterile deionized water was served as the blank control. The inhibition was calculated using the formula: Inhibition = [(Control colony diameter – Treated colony diameter) / (Control colony diameter – 5 mm for the mycelial plug)] × 100%. Statistical analysis of the experimental data was performed using the Pesticide Laboratory Biometric Data Processing System (PBT). Regression equations for virulence, inhibition medium concentration, and correlation coefficients of the tested agents were derived from this analysis. Phytophthora blight and downy mildew To assess the effectiveness of fluazinam formulations against pepper Phytophthora blight, six-to-eight-week-old pepper seedlings (cv. Zhongjiao 209) were treated with a solution containing 200 mg L − 1 of five fluazinam formulations (NEW, MCs, SubMCs, NCs, and SC) respectively. Following a 24-hour interval post-spraying, each pot (3 × 3 cm) was inoculated with 3 mL of a zoospore suspension (10 5 zoospore mL − 1 ). For the healthy control plants, plants were treated with sterile deionized water only. At the same time, plants inoculated with an equal amount of zoospore suspension without fungicide treatment were served as the infected control. The experiment was performed with triplicates for each treatment and 16 plants in 8 pots were subjected as a replicate. The disease index was then evaluated 3 to 5 days post-inoculation, following a previously established protocol . The control efficacy was calculated using the formula: Control efficacy = (Disease index of control – Disease index of treatment) / Disease index of control × 100%. 
To investigate the translocation of the active ingredient of fluazinam from the cucumber hypocotyl and to assess control efficacy against downy mildew of five fluazinam formulations, solutions of NEW, MCs, SubMCs, NCs, and SC (400 mg L − 1 concentration) were sprayed onto cucumber hypocotyls (0.6 mL per plant), respectively. Cucumber plants were grown in a greenhouse at 25 °C, 80% relative humidity, and a 12-hour photoperiod. The plants treated with sterile deionized water were subjected as control. Six hours after fungicide treatment, sporangial suspension (10 5 sporangia mL − 1 ) was inoculated onto the first and second true leaves. The experiment was performed with triplicates for each treatment and 12 plants in 12 pots were subjected as a replicate. Disease severity was assessed after 7 days based on a disease scale from 0 to 9, and the disease index was calculated . The control efficacy was determined using the above formula. Phytophthora after being treated with fluazinam ATP levels were assessed using a commercial ATP assay kit (Beyotime, Shanghai, China) following the provided guidelines. ATP content was quantified as nmol g − 1 of protein for mycelium treated with either sterile deionized water or five fluazinam formulations at a concentration of 10 mg L − 1 . Protein concentration was determined using the Bicinchoninic Acid (BCA) reagent kit (ST023, Cowin Biotech Co., Ltd, Beijing, China). This experiment was performed using the P. capsici strain BYA5. Each treatment has three replications. Initially, the pepper seeds were soaked in deionized water for 24 h, followed by draining. Thereafter, the seeds were wrapped in moist gauze to enhance their germination. 
Subsequently, 4–6 week-old seedlings were submerged in opaque plastic tubs (20 cm × 20 cm × 20 cm) containing 400 mL of Hoagland nutrient solution (a blend of calcium, large and medium elements, and various trace elements in a 2:2:1 ratio) that were placed under light exposure for an extended time period. As the pepper plants grew to approximately 30 cm in height, their roots were cleaned and transferred into a diluent solution (each formulation was diluted to 400 mg L − 1 using Hoagland nutrient solution, with a constant quantity maintained at 400 g). They were cultivated at 25 °C with 80% relative humidity and a 12-hour photoperiod for 1, 2, and 5 days. After cultivation, the entire plant was rinsed thoroughly with deionized water. Stem leaves (2.0 g) and roots (1.0 g) were then excised. Each sample was immersed in acetonitrile (MeCN) (2 mL), combined with NaCl (1 g), gently stirred for 5 min, and centrifuged at 4000 rpm/min for 5 min to extract the supernatant (1 mL). The supernatant was transferred to a 2 mL disposable centrifuge tube containing various sorbents: 50 mg of C 18 and 10 mg of graphitized carbon black (GCB) for stem leaves, and 50 mg of C 18 for roots, along with 150 mg of anhydrous MgSO 4 . The concentrations of fluazinam at different plant locations were determined using HPLC with an Agilent TC-C 18 column (4.6 mm × 250 mm, 5 μm; Agilent, USA) . Additionally, to facilitate a more systematic comparison of the uptake effect, the quantified data were analyzed. The uptake of the chemical into roots was conveniently characterized by the root concentration factor (RCF), as calculated below: RCF = (Concentration in roots / Concentration in external solution). Furthermore, for improved visualization of the translocation behavior of various fluazinam formulations, fluorescein isothiocyanate isomer (FITC) was utilized as the tracking agent and incorporated into the fluazinam formulations. 
Following this, their movement within pepper seedlings was observed using confocal laser scanning microscopy . In a manner akin to the aforementioned procedures, pepper seedlings were exposed to two distinct fluazinam formulations at dilution ratios (600 mg L − 1 ) for 6 days. Subsequently, root slices prepared by a freezing microtome were used for observation . To assess the diversity of soil microbial communities, peasant soil (Zhecheng County, Shangqiu City, Henan Province, China) was chosen as the substrate to ensure an appropriate environment for microbial growth. Initially, 5 mL of each of the five fluazinam formulations (200 mg L − 1 ) along with deionized water were added to 200 g of dried peasant soil, followed by the addition of 50 g of deionized water to ensure thorough mixing. Each treatment was replicated three times and incubated at 25 °C without light for 7 days. Throughout this period, the soil moisture content was maintained between 60% and 70% by adding deionized water as needed to ensure adequate humidity. Subsequently, 3 g of soil sample was collected from each group for DNA extraction. The metagenomic sequencing was conducted by Majorbio Bio-pharm Technology Co., Ltd (Shanghai, China). Samples were stored at − 80 °C before the test. Total microbial genomic DNA was extracted from soil samples using an E.Z.N.A ® soil kit (Omega Bio tek, USA). The concentration and purity of obtained DNA samples were assessed using an ultra-micro spectrophotometer (NanoDrop2000, USA) and 1% agarose gel electrophoresis. The hypervariable region V 3 -V 4 of the bacterial 16 S rRNA gene was amplified with primer pairs 338 F (5’-ACTCCTACGGGAGGCAGCAG-3’) and 806R (5’-GGACTACHVGGGTWTCTAAT-3’). Purified amplicons were pooled in equimolar amounts, and paired-end sequenced on an Illumina PE300 platform (Illumina, San Diego, USA) according to the standard protocols by Majorbio Bio-Pharm Technology Co. Ltd. (Shanghai, China). 
The resulting data were analyzed on the online Majorbio Cloud Platform ( https://www.majorbio.com ). The pepper plants (as described above) were sprayed with 400 mg L−1 of the five fluazinam formulations or with deionized water (control), with 12 plants per treatment. The plants were returned to the climate chamber at 25 °C for three hours after treatment, and the leaves of treated plants were then homogenized in liquid nitrogen. Approximately 300 mg of pepper leaves were used for total RNA extraction with the SV Total RNA Isolation Kit (Promega, Beijing, China), following the manufacturer's protocol. Endocytosis- and exocytosis-related genes were selected based on literature reports, and the BLAST tool on the NCBI database ( https://www.ncbi.nlm.nih.gov/ ) was used to identify their counterparts in pepper. The expression levels of these genes were subsequently validated by quantitative real-time PCR (qRT-PCR). Total RNA was reverse transcribed using a first-strand synthesis kit with PrimeScript™ RT Master Mix (Perfect Real Time; TaKaRa, Japan), following the manufacturer's guidelines. qRT-PCR was conducted on a QuantStudio™ 6 Flex Real-Time PCR System (Applied Biosystems, Thermo Fisher Scientific, USA) with Power SYBR® Green Master Mix (Applied Biosystems). Reactions were run in triplicate under the following conditions: pre-denaturation (95 °C, 2 min), followed by 40 cycles of denaturation (95 °C, 10 s) and annealing/extension (60 °C, 30 s). Relative quantities (RQ) were determined using the 2^(−ΔΔCt) method. The actin-depolymerizing factor 1 (Actin) gene (accession number LOC107842967) served as a reference to normalize the quantification of target gene expression. Primer details are provided in Table .
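The 2^(−ΔΔCt) calculation used above can be sketched as a small helper. The Ct values below are invented for illustration (target gene vs. the Actin reference, treated vs. control), not the study's data:

```python
def relative_quantity(ct_target_treated, ct_ref_treated,
                      ct_target_control, ct_ref_control):
    """Relative expression by the 2^(-ΔΔCt) method."""
    delta_ct_treated = ct_target_treated - ct_ref_treated   # normalize to reference gene
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_treated - delta_ct_control    # compare with control group
    return 2 ** (-delta_delta_ct)

# Hypothetical Ct values: a ΔΔCt of -1 corresponds to a 2-fold up-regulation.
rq = relative_quantity(24.0, 18.0, 25.0, 18.0)
print(rq)  # → 2.0
```

In practice the ΔCt of each group is averaged over the technical replicates before the ΔΔCt is formed.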
Characterization of microcapsules

Surface morphology and size distribution

In this investigation, fluazinam microcapsules were prepared with MDI and ER interpenetrating networks as shell materials through interfacial polymerization, using PEI to crosslink with MDI and ER at the oil-water interface (Fig. a). The D50 values of the MCs, SubMCs, and NCs prepared via interfacial polymerization were 2.36 μm, 0.93 μm, and 193.85 nm, respectively, while those of SC and NEW were 6.17 μm and 187.03 nm (Fig. , Additional file). SEM was used to characterize the morphology of the pesticide-loaded systems. MCs were spheres with rough surfaces and well-defined wrinkles (Fig. b), whereas the smaller SubMCs and NCs were smooth and spherical (Fig. c and d). TEM showed that these capsules had pronounced core-shell structures (Fig. e-g). The average shell thickness of MCs was approximately 240 nm (Fig. h), significantly larger than those of SubMCs (107 nm) and NCs (25 nm) (p < 0.05) (Fig. i and j).

Fourier-transform infrared (FTIR) spectroscopy

FTIR spectroscopy was used to identify alterations in functional groups over the course of the reaction. As depicted in Fig. a, the asymmetric stretching vibration of the isocyanate group of MDI occurred at 2250 cm−1. Following the reaction with PEI, characteristic absorption peaks attributed to the substituted urea group and the amide carbonyl group emerged at 1660 and 1705 cm−1, respectively. For ER, the characteristic absorption peaks of the epoxy ring were observed at 1230, 915, and 830 cm−1; after reaction with PEI, the stretching vibration peak of the ether bond appeared at 1130 cm−1. In TC, the representative absorption peak of N=O occurred at 1540 cm−1. This peak was also observed in the spectra of the pesticide-loaded systems, demonstrating that the TC had been encapsulated.
The complete reaction between MDI and PEI was confirmed by the absence of the characteristic isocyanate absorption peak in the spectra of MCs, SubMCs, and NCs. In contrast, the reaction between ER and PEI was incomplete: a smaller particle size, larger specific surface area, and thinner capsule shell produced a more extensive reaction of the epoxy resin and a reduced intensity of the characteristic epoxy-ring absorption peak (Fig. b).

Thermal stability

Thermogravimetric analysis was employed to assess the thermal stability of the pesticide-loaded capsules. Figure c shows that TC began to lose weight at 160 °C and then declined swiftly to 4.6% of its initial weight at 300 °C. Incorporating the carrier mitigated the rate of weight loss of TC, and thicker shells correlated with reduced weight loss; consequently, MCs exhibited a significantly slower weight-loss rate than SubMCs and NCs (p < 0.05).

Release profile

As shown in Fig. d, the release profiles of NEW, MCs, SubMCs, and NCs in the release medium were assessed. NEW and TC had similar release profiles, both significantly faster than those of MCs, SubMCs, and NCs, which is likely attributable to the absence of capsule encapsulation (p < 0.05). As capsule shell thickness decreased, the release rate increased: NCs released notably faster than SubMCs and MCs. By 16 h, the cumulative release for NCs reached approximately 90%, compared with around 70% for SubMCs and 40% for MCs. Other studies have likewise shown that a decrease in particle size can accelerate the release rate of active ingredients.

In vitro bioactivity of formulations against Phytophthora

The biological efficacy of fluazinam in the various formulations against Phytophthora was investigated using the mycelial growth rate method, with fluazinam concentrations of 0, 0.5, 1, 2, 5, and 10 mg L−1.
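EC50 values from the growth rate method are typically obtained by regressing a transformed inhibition rate against log concentration. A minimal pure-Python sketch using a logit transform and ordinary least squares is shown below; the inhibition data are invented for illustration, and the exact regression model the authors used (e.g. probit vs. logit) is not specified here:

```python
import math

def ec50_logit(concs, inhibitions):
    """Estimate EC50 by a least-squares fit of logit(inhibition) vs log10(conc).

    concs: concentrations (> 0); inhibitions: fractional inhibition in (0, 1).
    Returns the EC50 in the same units as concs.
    """
    xs = [math.log10(c) for c in concs]
    ys = [math.log(i / (1.0 - i)) for i in inhibitions]  # logit transform
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    # logit = 0 at 50% inhibition, so log10(EC50) = -intercept / slope
    return 10 ** (-intercept / slope)

# Invented dose-response data, roughly centered near 1 mg L-1.
concs = [0.5, 1, 2, 5, 10]
inhib = [0.30, 0.46, 0.63, 0.80, 0.90]
print(round(ec50_logit(concs, inhib), 2))
```

Real analyses usually fit a four-parameter log-logistic model (e.g. with R's drc package or SciPy); the linearized fit above is only a compact approximation.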
The EC50 values for TC, MCs, SubMCs, NCs, NEW, and SC were 1.65, 56.53, 12.45, 1.15, 1.79, and 6.37 mg L−1, respectively (Table ). This trend was consistent with those against P. nicotianae and P. sojae (Fig. and Tables and , Additional file). Notably, NCs exhibited a lower EC50 value than MCs, SubMCs, or SC, and one comparable to TC and NEW. Sensitivity tests of NCs, NEW, and SC against fluazinam-resistant P. capsici (RJ-6) showed that the EC50 of NCs was 2.4 times that of the sensitive strain, whereas the corresponding values for NEW and SC were 7.2 and 10.2 times higher, respectively. Nano-fluazinam thus reduced the resistance level of the resistant strain (RJ-6) to fluazinam (Table , Additional file), significantly below that of SC and comparable to NEW (p < 0.05). Colony images taken on the fourth day after treatment with the different formulations confirmed that NCs displayed the most effective biological activity among all tested formulations (Fig. a). Furthermore, the ATP content of P. capsici treated with sterile deionized water or the five fluazinam formulations at 10 mg L−1 was examined. The ATP content of P. capsici in the MCs, NCs, and SC treatment groups was 0.0276, 0.0007, and 0.0028 nM, respectively (Fig. c), indicating that NCs reduced ATP content more than MCs and SC. This result is similar to that of Peng et al., who found that a fluopyram nanoagent inhibited ATP synthesis more effectively than fluopyram. Overall, the fluazinam nanocapsules exhibited superior inhibitory effects against Phytophthora across concentrations. This improvement in antimicrobial activity may be attributed to the nanoscale size of NCs, which enables easier penetration into the mycelium for fluazinam delivery.

Control efficacy of different fluazinam formulations on oomycete diseases

The efficacy of the various fluazinam formulations against pepper Phytophthora blight was then assessed. The control efficacies of NCs, NEW, and SC were 72.45%, 63.69%, and 59.41%, respectively (Fig. b and d). Clearly, the fluazinam nanocapsules achieved approximately a 10-percentage-point increase in control effectiveness over NEW and SC. Similarly, the effect of the different fluazinam formulations on cucumber downy mildew was investigated. NCs were effectively taken up and translocated from the hypocotyl to the upper leaves. After the first true leaves were sprayed at 400 mg L−1, fluazinam nanocapsules, NEW, and SC inhibited 52.02%, 38.67%, and 1.65% of lesions on the second true leaves, respectively (Fig. , Additional file). Compared with NEW, NCs exhibited a 10-percentage-point increase in control efficacy and significantly outperformed SC (p < 0.05). Previous studies have likewise reported that nano-formulation of non-systemic pesticides can improve their control efficacy, and the mechanism underlying that of fluazinam nanocapsules warrants further study.

Upward translocation behaviors of different fluazinam formulations

The distribution pattern of fluazinam within pepper plants was analyzed by HPLC. In the stems and leaves (Fig. a), fluazinam in the NCs group increased gradually from 29 to 94 mg L−1 over days 1–5, whereas the SC group reached at most 4 mg L−1; the NCs group consistently maintained significantly higher fluazinam levels than the other groups during this period (p < 0.05). Similarly, in pepper roots (Fig. b), the fluazinam concentration in the NCs group rose progressively from 153.5 to 495.5 mg L−1 over days 1–5, while SC reached 35 mg L−1. The root concentration factor (RCF) for NCs on day 5 was 1.6, compared with 0.04 for SC. These results indicate rapid and extensive accumulation of fluazinam from NCs, leading to higher fluazinam content in leaves than with SC and, consequently, improved protective efficacy against cucumber downy mildew and pepper Phytophthora blight.
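The control-efficacy percentages reported in this section are conventionally derived from disease severity (or lesion counts) in treated versus untreated plants. A small illustrative helper is sketched below, assuming the standard formula efficacy = (1 − treated/control) × 100; the severity values are back-calculated to reproduce the reported pepper-blight efficacies and are purely illustrative:

```python
def control_efficacy(disease_control, disease_treated):
    """Control efficacy (%) relative to the untreated control."""
    return (1.0 - disease_treated / disease_control) * 100.0

# Hypothetical disease severities (untreated control = 100 arbitrary units),
# chosen so the helper reproduces the efficacies reported in the text.
for name, severity in [("NCs", 27.55), ("NEW", 36.31), ("SC", 40.59)]:
    print(f"{name}: {control_efficacy(100.0, severity):.2f}%")
```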
Following this, to validate the absorption of NC nanoparticles by pepper roots, FITC-labelled fluazinam formulations were employed to visualize their movement by confocal laser scanning microscopy (CLSM). For comparison, both untreated and treated specimens were examined. As depicted in Fig. c, no fluorescent signal was detected in the untreated pepper plants, whereas FITC-labeled NCs were distinctly visible in the root cross-sections under CLSM. Similarly, Pan et al. found that nanoparticles can penetrate roots. Taking the above results together, the reasons for the improved control of pepper Phytophthora blight and cucumber downy mildew by fluazinam nanocapsules can be analyzed as follows. Pesticide particle size is an important factor affecting plant absorption. Fluazinam nanocapsules were smaller than the other treatment forms and could therefore be absorbed into plants more readily, which increased the fluazinam content in the plants and improved control efficacy. Roots absorb nanoparticles into the xylem through the cortex or pericycle, a process that also depends on nanoparticle size and surface properties. Owing to the scale effects of nanostructures, nanoparticles have a higher probability than micron-sized particles of entering cells through intercellular spaces, pores, or other routes. In addition, roots can release electrically charged amino acids or organic acids, which may facilitate the adhesion and absorption of nanoparticles to the root system via their surface characteristics.

Environmental safety assessment of various fluazinam formulations

It is well known that the soil microbial community affects soil fertility and plant growth. Therefore, soil microbial community diversity was assessed 12 days after treatment with the various fluazinam formulations.
Principal component analysis (PCA) based on amplicon sequence variants (ASVs) revealed distinct clustering of samples across the six groups (Fig. a). Microbial α-diversity was compared between the NCs group and the other groups: no significant differences were found in the Chao1 or Shannon indices between groups (Fig. and , Additional file). These results indicate that fluazinam nanocapsules were no more toxic to the soil microbial community than the suspension concentrate. To investigate trends in microbial community dynamics, differences in species composition and the relative abundance of bacteria at the family level (top 20) were compared between control and fluazinam-treated samples (Fig. b). A Kruskal-Wallis test on relative species abundance, combined with LEfSe analysis, showed that the dominant bacteria in the NCs group were Micrococcaceae and Planococcaceae, whose relative abundances were significantly higher than in the other groups (p < 0.05) (Fig. c and Fig. , Additional file). Micrococcaceae and Planococcaceae are both important for reducing nitrate and degrading organic pollutants in soil, which suggests that fluazinam nanocapsules could enhance the relative abundance of beneficial bacteria. It has been reported that Microbacterium is less enriched after treatment with fluazinam, but in this study its relative abundance increased after treatment with fluazinam nanocapsules. This could be explained by fluazinam nanocapsules being better absorbed into the plant, reducing fluazinam residues in the environment. In addition, nano-fluazinam had a slow-release effect, which could also reduce the impact of fluazinam on soil microorganisms. Moreover, two beneficial bacteria at the genus level, Allorhizobium-Neorhizobium-Pararhizobium-Rhizobium and Paenisporosarcina, were identified in the NCs group.
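The two α-diversity metrics compared earlier in this section are computed per sample from ASV count tables. A minimal sketch with a toy count vector is shown below (Shannon uses the natural log here; the log base varies between analysis tools):

```python
import math
from collections import Counter

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over nonzero taxa."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def chao1(counts):
    """Chao1 richness: S_obs + F1^2 / (2 * F2), F1/F2 = singleton/doubleton counts."""
    s_obs = sum(1 for c in counts if c > 0)
    freq = Counter(counts)
    f1, f2 = freq.get(1, 0), freq.get(2, 0)
    if f2 == 0:
        # bias-corrected form avoids division by zero when there are no doubletons
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 ** 2 / (2.0 * f2)

toy = [10, 5, 2, 2, 1, 1, 1]   # toy ASV counts for one sample
print(round(shannon_index(toy), 3))
print(chao1(toy))              # → 9.25 (7 observed + 3^2 / (2*2))
```

Pipelines such as QIIME 2 or scikit-bio implement the same formulas; the helpers above only illustrate what the indices measure.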
It has been previously reported that Allorhizobium-Neorhizobium-Pararhizobium-Rhizobium can enhance plant biomass and improve phytoremediation capabilities under environmental stress. Paenisporosarcina is capable of heterotrophic denitrification, which promotes the carbon-nitrogen cycle, and some antifreeze-protein-containing species can help plants resist cold stress. These results suggest that the increased abundance of these two bacteria in the NCs group might promote soil health, especially under adverse environmental conditions.

Induction of genes associated with plant endocytosis

qRT-PCR analysis was used to determine the relative expression levels of target genes in pepper plants after treatment with the various fluazinam formulations, focusing on genes associated with the endocytosis and exocytosis processes, such as F-actin-capping (FACP) and ras-related RABF2a. Compared with the other fluazinam formulations, expression of the FACP gene was up-regulated 2-fold in the NCs group (Fig. a) and that of the RABF2a gene 1.9-fold (Fig. b), both significantly higher than in the SC group (p < 0.05). The upregulation of these two genes indicates that the endocytosis pathway is activated by treatment with fluazinam nanocapsules. Previous studies have shown that endocytosis plays a vital role in cell penetration and the subsequent internalization of nanoparticles. This indicates that enhanced endocytosis allows plant cells to internalize fluazinam-loaded nanoparticles more efficiently, thereby improving control efficacy. The results are consistent with those reported by Palocci et al., confirming that poly(lactic-co-glycolic acid) nanoparticles (PLGA NPs) enter grapevine cells via endocytic vesicles.
In summary, formulating fluazinam as nanocapsules enables this non-systemic fungicide to be transported upward within plants while remaining environmentally friendly. Pepper plants can absorb the fluazinam nanocapsules through their roots and distribute them within the plant, and enhanced endocytosis enables plant cells to internalize the fluazinam-loaded nanoparticles more efficiently, thereby enhancing their efficacy against oomycete diseases compared with the other treatment forms. Regarding environmental safety, fluazinam nanocapsules exhibited no greater toxicity than the suspension concentrate; meanwhile, the increased abundance of Allorhizobium-Neorhizobium-Pararhizobium-Rhizobium and Paenisporosarcina might promote soil health, especially under adverse environmental conditions. This study offers a promising strategy for addressing the application limitations of non-systemic pesticides in soil and for exploring new applications of nanocarriers.

Below is the link to the electronic supplementary material. Supplementary Material 1
Perceptions of Endocrine Clinicians Regarding Climate Change and Health

The threat of climate change and environmental pollutants to health is increasingly recognized in the medical community. In September 2021, more than 200 health journals, in a simultaneous publication, called for emergency action on the climate crisis, stating that "as health professionals, we must do all we can to aid the transition to a sustainable, fairer, resilient, and healthier world" and calling climate change "the greatest threat to global public health". In June 2022, the American Medical Association announced a policy declaring climate change a public health crisis. There is also growing support for education on climate change, including a global policy report recommending that climate change education be incorporated into medical school curricula to help physicians understand the climate emergency and its health impacts. Preventing the effects of climate change on health is also a major motivator for patients. A 2019 ecoAmerica survey found that 66% of Americans believe that if the United States took steps to prevent climate change, it would improve their health, and 76% chose "protecting personal and public health" as their top motivation for supporting climate solutions. The survey also indicated that 64% of Americans trust health professionals for information on climate change; however, only 19% of Americans report recently hearing about climate change from health professionals. There is a growing evidence base describing the environmental insults and the harms of climate change and human-made pollutants to the endocrine system.
Key environmental threats to the endocrine system include endocrine-disrupting chemicals and air pollution; particulate matter of 2.5 microns or less in diameter (PM 2.5 ) has been associated with diabetes incidence and prevalence, cortisol and catecholamine levels, and maternal thyroid function tests, and air pollution has also been linked to vitamin D deficiency . Individuals with diabetes, a main focus of endocrine care, are particularly vulnerable to the effects of climate change . One study found an increase in diabetes incidence with higher temperatures . A Spanish study found that elevated ambient temperatures were associated with an increased prevalence of dysglycemia and insulin resistance in a large cohort of adults, which could only be partially explained by changes in physical activity . Hot weather and heat waves have been associated with increased admissions and emergency room visits among individuals with diabetes . Moreover, a Brazilian study estimated that every 5 °C increase in daily mean temperature was associated with a 6% increase in hospitalization due to diabetes . Individuals with diabetes are also more prone to dehydration and heatstroke . Studies have also shown associations between air pollution and increased insulin resistance, as well as an increased incidence of diabetes . More acutely, individuals with diabetes exposed to fine particulate matter (PM 2.5 ) from a wildfire event were found to have an increased risk of respiratory and cardiovascular physician visits in the period afterward . There is an underlying interplay between sustaining the modern human diet, human health, and associated environmental impacts . Agriculture is responsible for approximately one-third of greenhouse gas emissions, mainly produced by methane from cattle and nitrous oxides from fertilizer, and food systems are a leading cause of land conversion, deforestation, and loss of biodiversity . 
Overconsumption of unhealthy processed foods and animal-based foods is linked to increased rates of obesity, diabetes, cancer, and cardiovascular disease . In addition, the way that food is raised, prepared, processed, and packaged influences exposure to endocrine-disrupting chemicals . Recently, the endocrine community has also started to publicly recognize the threat of climate change to endocrine health. In 2022, the Endocrine Society announced the goal “to increase awareness of the impact of climate change on endocrine health” , and the 2023 Endocrine Society Conference featured plenaries focusing on the health impacts of climate change. Endocrine professionals routinely strive to provide safe and effective care to their patients while providing preventative care aimed at ensuring a healthy future. Endocrine clinicians have a unique perspective and opportunity to understand the health effects of environmental pollution and climate change, to assume leadership in a preventative and educational role in climate preparedness, and to treat health outcomes related to climate change and environmental pollution. Despite the existing impacts of climate change and environmental threats on human endocrine health, there is little information on the viewpoints of practicing endocrine clinicians regarding this topic. To the best of our knowledge, there is no existing survey assessing the perceptions of endocrine clinicians on climate change and health. The purpose of this study was to evaluate endocrine clinicians’ perceptions of climate change awareness and knowledge, as well as their motivation and barriers to incorporating climate change concepts into practice, and to demonstrate the need for climate change curricula in endocrine training. This study included a 5-min questionnaire for endocrinology clinicians. 
The survey questions were developed from a review of previously published surveys that assess physicians’ experiences with climate change , selecting questions that focus on the domains of climate change awareness and knowledge, as well as motivation for action. Questions addressing demographics, endocrine-specific topics, and perceptions of climate health curricula in endocrine training were added. The survey contained 18 questions , primarily consisting of Likert-scale questions, along with some multiple-choice questions and one free-text response question at the end of the survey, where respondents were given the opportunity to provide an anecdote about their experiences or comment freely on the subject. The final questionnaire was pilot tested with the endocrine clinicians in our clinic for completion time, readability, and overall flow of questions but was not otherwise validated. Data were collected and managed using REDCap electronic data capture tools hosted at the University of Vermont . This study was approved as exempt research by the University of Vermont Committees on Human Research (STUDY00002229). Eligible participants included self-identified endocrinology clinicians (e.g., MD, DO, diabetes educators, nurse practitioners, and physician assistants). Non-medical participants and participants who did not work in an endocrine practice were excluded. A link to the REDCap survey was sent to members of the endocrine community through multiple methods, including social media (Facebook Endocrinologist group, WhatsApp Endocrine fellow group, Twitter, DocMatter Endocrine Society page), and an email was sent to all endocrine fellowship program directors within the United States. We shared the link via the listed methods twice in hopes of maximizing recruitment. The link opened an information sheet describing the purpose of this study and its procedures. 
Participants were told that they would be asked a series of questions about their endocrine practice and their perceptions of climate change. Participants were also told that this was a one-time, de-identified questionnaire. The information sheet concluded with a yes or no question about whether the participant would like to proceed with the study. Those who indicated they would like to proceed were directed to the actual questionnaire. We closed the survey once no new responses were arriving. Study data were collected between September 2022 and November 2022. Data were analyzed using descriptive and univariate statistics. Continuous data were analyzed using Wilcoxon rank sum tests, and Fisher’s exact test or chi-square analyses were used for categorical data. Analyses were conducted using STATA 16.1 (Stata Corporation, College Station, TX, USA), with p < 0.05 required for statistical significance. A total of 164 self-identified endocrinology clinicians completed the online questionnaire . Overall, 64% of participants identified as female, and 98% of respondents were physicians; among these, 31% were program directors, and 29% were endocrine fellows. The median age was 41 years (mean 44 years), and 91% reported being employed in the United States. The majority indicated both outpatient and inpatient settings as their primary work environment (58%). The majority of respondents (95%) reported that climate change is happening, and 52% were very worried about climate change. Female clinicians were significantly more worried about climate change than male clinicians ( p = 0.02). Responses were variable regarding knowledge about climate change and health (7% very, 40% moderately, 35% modestly, and 18% not at all) and concerns about the effects of climate change on patient health (13% a great deal, 36% a moderate amount, 26% only a little, 8% not at all, 17% don’t know). 
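The categorical comparisons named in the methods were run in STATA; purely as an illustration of the idea behind Fisher's exact test, the hypergeometric tail summation for a one-sided test on a 2 × 2 table can be sketched in plain Python. The function name and table counts below are hypothetical, not the study data:

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher exact p-value for a 2x2 table [[a, b], [c, d]]:
    the probability, with margins fixed, of a top-left cell count >= a."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)
    # sum the hypergeometric tail from the observed count upward
    return sum(comb(row1, x) * comb(n - row1, col1 - x)
               for x in range(a, min(row1, col1) + 1)) / denom

# Hypothetical counts: worried vs. not worried, split by two groups
p = fisher_exact_one_sided(30, 10, 20, 20)
print(round(p, 4))
```

A two-sided test, as typically reported, additionally accumulates tables at least as extreme in the opposite direction; dedicated statistics packages handle this, as well as the Wilcoxon rank-sum tests for continuous data, directly.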
The top three endocrine climate–health concerns identified were reduced exercise due to motorized transport, malnutrition resulting from food prices, and disruptions to healthcare services during weather events . The majority of respondents reported motivation to take action in their personal or professional lives regarding climate change (69% strongly agree or agree). Free-text comments reflected a variety of opinions, with selected examples presented in . There were significant differences in responses based on age. Compared to older clinicians, younger clinicians (aged less than 44 years) were significantly more concerned about global climate change affecting patients in terms of anxiety, depression, or other mental health conditions ( p = 0.003); increased poverty due to economic hardship and resulting health problems ( p = 0.001); disease incidence and severity related to exposure to particulate matter from air pollution ( p = 0.02); disruptions to health care services for people with chronic conditions during extreme weather events ( p = 0.009); the effects of increased meat consumption on patient health ( p = 0.04); and the environmental effects of medical waste ( p = 0.02). Younger clinicians were also significantly more motivated to take action in their personal or professional lives ( p = 0.006). Responses were divided on whether clinicians have a responsibility to bring the health effects of climate change to their patients’ attention (12% strongly agree, 37% agree, 40% neutral, 6% disagree, 5% strongly disagree). The majority have rarely (38%) or never (45%) discussed climate change with their patients. The three most highly ranked barriers to addressing climate change-related health topics with patients included lack of time (66%), lack of knowledge on how to approach the issue with patients (48%), and the clinician’s lack of knowledge on the subject (45%). 
The three resources perceived as most helpful were continuing medical education (CME), patient education materials, and policy statements. The majority of respondents agreed that teaching about climate change and health impacts should be integrated into medical education (73% strongly agree or agree), and 83% of the endocrine program directors and fellows indicated that their program does not cover this topic. Both program directors and fellows agreed, without a significant difference ( p = 0.13), that teaching about climate change and its association with health impacts should be integrated into medical education. We found that the majority of endocrine clinicians who responded to the survey were aware of and worried about climate change, with variable degrees of knowledge about the topic. The top three endocrine climate–health concerns were reduced exercise due to motorized transport, malnutrition linked to food prices, and disruptions to healthcare services during weather events. Most respondents agreed that teaching about climate change and its health effects should be integrated into medical education, with consensus among program directors and fellows; however, according to our survey results, the majority of endocrine fellowship programs do not teach about climate change and its associations with health. We noted that the effects of increased meat consumption on the health of patients, as well as the climate effects from farming animals related to high rates of meat consumption, did not rank among the top three endocrine climate–health concerns. We had expected this to be a top concern given the important relationships between diet and endocrine health conditions, such as diabetes, as well as the effects of agriculture on planetary health. Furthermore, it has been recognized that a shift to healthy and sustainable plant-forward diets would have significant benefits in both reducing greenhouse gas emissions and improving health outcomes . 
Endocrine clinicians already frequently counsel patients on healthier diets and physical activity. There is also an opportunity to engage patients in these activities from a more sustainable perspective. On an individual level, adopting a plant-based diet can save 0.8 tonnes of CO 2 -equivalent emissions per year, which is considered a high-impact action that substantially reduces personal emissions . It has been found that individuals’ willingness to eat less meat increases with its perceived effectiveness , which hints at the importance of increasing education and awareness on these topics. In addition, encouraging exercise or the use of public transport over individual motorized transport can have substantial effects on both the environment and individual health. Increased public transport usage is associated with a decreased prevalence of obesity . Commuting by bicycle or walking can decrease greenhouse gas emissions, and in turn, bicycle commuters experience an estimated 50% reduction in all-cause mortality and cardiovascular disease . Reduced exercise due to motorized transport was a top concern among endocrine clinicians who responded to the survey. We noted some significant differences in responses according to age group, with younger clinicians reporting both more concerns and more motivation to take action regarding climate change. This may be aligned with a generation gap in climate change beliefs . However, this contrasts with some studies that report positive relationships between age and pro-environmental behaviors. We do not have sufficient detail in the questionnaire to explain why the differences in responses by age exist. A qualitative or mixed methods study would likely be required to explore these findings. These results should be explored in future research, as interventions to address climate change may vary by age group. The finding that 5% of surveyed endocrinology clinicians do not accept that climate change is happening is concerning. 
Future research should further explore the perceptions and characteristics of clinicians who do not believe climate change is occurring. We would also like to highlight that the gap in healthcare education regarding the effects of climate change on health also affects the endocrine community. The low percentage of respondents (7%) who feel “very knowledgeable” about the connection between climate change and health is concerning. This gap in understanding can hinder effective advocacy and action. Our survey revealed a self-reported knowledge gap, with 17.7% of respondents indicating that they feel not at all knowledgeable about climate change and its health impacts. We also found that the majority of respondents, many of whom are program directors or fellows, agreed that teaching about climate change and health impacts should be integrated into medical education, and 83% of the program directors and fellows indicated that their programs do not cover this topic. As concisely stated in a Lancet comment on healthcare education, “It is time for a global planetary health education revolution to equip the health sector to treat the Code Red Emergency we face” . Without intentionally creating educational opportunities on planetary health topics, we are missing opportunities to equip endocrine clinicians with the knowledge base and tools needed to address this threat as we continue to encounter the effects of climate change. Climate change and health topics should be incorporated into endocrine fellowship curricula and CME activities. Responses were divided on whether clinicians have a responsibility to bring the health effects of climate change to the attention of their patients. Similar to a multinational survey on the views of health professionals on climate change and health , a lack of time was the most common barrier identified. 
Other important barriers included a lack of knowledge on how to approach the issue with patients and the clinician’s own lack of knowledge on the subject. This survey serves as a starting point for understanding the perspective of endocrine clinicians on climate change and health and for identifying gaps in knowledge, awareness, and self-efficacy. There is evidence suggesting that informing people about the health effects of climate change, as well as solutions to address them, can increase support for actions to reduce emissions . This study had several limitations. This was a non-sponsored survey, which limited the sample size, as there were no incentives for completion. Our data allowed us to identify some factors affecting concerns about climate change and health among endocrine clinicians, such as age and gender. A larger study with more data may allow for the exploration of other interesting associations between responses, such as the effects of geographical location or education level. It is possible that the clinicians who responded were more concerned about climate change than those who did not complete the survey. There may also be selection bias, given that not all endocrine clinicians use social media, and younger individuals may be more likely to use it. The questionnaire included questions from prior published surveys primarily focused on the themes of awareness and knowledge, along with additional questions of interest. We acknowledge that our final survey did not undergo a full validation process, other than pilot testing to ensure the clarity and functionality of the survey interface. The endocrine topics incorporated into the survey are documented endocrine-related health concerns found in the literature. Undoubtedly, there are more endocrine health concerns related to climate change or environmental health that have not been included in the survey. 
We also did not specifically mention important issues surrounding climate change that exacerbate health and social inequities, such as the unequal exposure of racial or socioeconomic groups to air pollution or power imbalances in the food system. This survey does not provide evidence that the specific health concerns described are directly climate- or endocrine-related. Despite its limitations, this survey brings together documented climate and endocrine-related health effects and the opinions of current practicing clinicians and adds to the body of evidence regarding climate change perceptions. Ultimately, clinicians should aim to deliver sustainable services that augment wellbeing and reduce health inequalities. Endocrine clinicians, as health care workers, are trusted voices within the community and have the opportunity to encourage healthy collective behaviors for sustainable living and even have the potential to be strong advocates for sustainable healthcare systems and climate action. Examples of smaller actions that can still have a meaningful impact include encouraging patients to take nature walks, encouraging sustainable eating, prescribing re-usable insulin pens, reducing unnecessary bloodwork, integrating telemedicine into practice, discussing climate issues with colleagues, and developing emergency action plans for diabetes patients during heat waves or natural disasters. The majority of the endocrine clinicians surveyed were aware of and worried about climate change, with varying levels of knowledge and concern about climate change and its health effects. Most respondents agreed that teaching about climate change and health effects should be integrated into medical education, with similar responses among program directors and fellows. In addition, most reported being motivated to take action in some way for climate change. 
The results also reflect an untapped interest in developing a curriculum focused on climate change and endocrine health within fellowship programs and CME. |
Feelings from the Heart Part II: Simulation and Validation of Static and Dynamic HRV Decrease-Trigger Algorithms to Detect Stress in Firefighters | 757db5ea-be84-4713-9a9c-820c3f9ce752 | 9029799 | Physiology[mh] | The number of sensors implemented in mobile ECG devices has remarkably increased in recent years. Most available mobile ECG devices nowadays have several sensors on board and interact with smartphones. State of the art ECG devices additionally assess parameters such as movements by means of accelerometers, the sea level by means of pressure sensors, and even temperature and electrodermal activity . This combination of sensors allows for a complex evaluation of physiological functioning of the autonomic nervous system and its interaction with the central nervous system by assessing people’s ECG and taking corresponding movements and energy expenditure into account. However, most research uses sensor data offline, and online approaches processing data in real-time are largely missing. Specifically, an interactive psychophysiological assessment needs (simple) online algorithms, which can identify episodes of transient bodily changes potentially signaling psychosocially relevant states in daily life. For example, Ebner-Priemer et al. developed a functional algorithm to detect episodes of intensified physical activity to trigger the assessment of wellbeing . These authors only used accelerometer information for their algorithm; however, nowadays researchers strive to develop online and real-time systems to identify (conscious and subconscious) psychosocial states associated with increased vulnerability and stress by using combined information of ECG and accelerometers . This approach mainly grounds on the concept of additional heart rate and additional heart rate variability reduction (AddHRVr; ), which assumes that metabolically independent HRV decreases may result from cognitive and emotional factors . 
AddHRVr should allow conclusions about individual psychosocial states and should therefore indicate transitions from situations with lower stress to situations with elevated stress. The AddHRVr algorithm assumes that transient HRV reductions do not only indicate metabolic needs of the organism, but are also sensitive for the complex interplay between the autonomic and central nervous system . Specifically, the vagus nerve as the primary parasympathetic nerve and major constituent of HRV ensures a rapid communication between the brain and the heart (~200 ms) with afferent fibers (from the heart to the brain) outweighing efferent fibers (from the brain to the heart). Hence, vagally-mediated HRV could signal cognitive function, emotion regulation, and states of vulnerability and stress . This is in accordance with several prominent theories, which account for the salient role of HRV for psychosocial functioning (e.g., theory of neurovisceral integration, ; polyvagal theory, ; vagal tank theory, ). Specifically, the root mean square of successive differences (RMSSD) and the high frequency (HF) component of the heartbeat are indicators of HRV and primarily reflect vagal function . These measures seem to be especially sensitive to higher central nervous system function and thus could be of special importance for psychosocial functioning . Taken together, analyzing HRV by taking the metabolic needs into account may inform about the psychosocial functioning of an organism and may indicate transitions of stress in an ever-changing environment. However, how can we arrive at a specific AddHRVr algorithm that is sensitive to psychosocially meaningful situations in daily life, thus enabling for an interactive psychophysiological assessment? 
Schwerdtfeger and Rominger presented a two-step simulation approach to derive AddHRVr algorithm adjustments that could be used to develop online algorithms for interactive psychophysiological assessments to identify periods of vulnerability and stress in everyday life . In a first step, the authors assessed the individual association between HRV and bodily movement . This is realized by regression analyses of the continuously recorded vagally-mediated HRV (RMSSD) on a minute-by-minute basis and the corresponding bodily movements (and associated energy expenditure). Based on this linear regression information, individual reductions of RMSSD can be estimated independent of metabolic demands. Following the algorithm, a meaningful RMSSD decrease takes place when the deviation of the momentary HRV from the predicted HRV (based on the energy expenditure) reaches a pre-defined RMSSD threshold (i.e., 0.5 SDs of the RMSSD during the calibration period). The algorithm then delivers a binary trigger, indicating an AddHRVr, whenever a predefined number of meaningful RMSSD decreases (i.e., RMSSD threshold) are observed within a predefined time window (i.e., RMSSD window; ). This individual trigger distribution simulates the online functioning of the algorithm and allows one to identify potentially meaningful situations. In the second step, this information can be applied to bootstrapped multilevel analyses to evaluate if a specific algorithm setting is associated with specific affective states, resilience, or stress levels, among others. Algorithms with sufficient power, acceptable effect sizes, and a feasible number of delivered triggers would be considered for future applications in online studies. Based on this approach, Schwerdtfeger and Rominger showed that, in principle, a specific setting of an AddHRVr algorithm can be specified to index specific psychosocial states (i.e., low quality of social interactions). 
However, previous validation approaches were based on subjective ratings of psychological states randomly assessed during an ecological momentary assessment (EMA; ) and, to the authors' knowledge, there is no simulation study available focusing on more objective measures of stress. Therefore, we used an already published data set , which includes 38 male firefighters who each wore a mobile ECG device for 24 h, which recorded ECG and movement-associated energy expenditure. Furthermore, the operations of the firefighters were classified into three increasing levels of objective stress (i.e., routine work at the fire station, routine operations, emergency operations). The timing of these operations was based on the official operating times of the primary control unit (for a similar procedure, see, e.g., ). This allowed us to estimate (1) when a transition of objective stressfulness occurred, and (2) whether it was an increase (e.g., from routine operations to emergency operations) or a decrease of objective stressfulness (e.g., from emergency operations to routine work at the fire station). Furthermore, since all available studies on the AddHRVr algorithm focused on static algorithms, we additionally simulated a dynamic algorithm in this study . In contrast to a static AddHRVr algorithm, a dynamic algorithm is specifically designed to adapt to participants’ HRV deviations, which might further increase the sensitivity to detect AddHRVr due to transitions of objective stress. Therefore, this simulation study examined whether a specific setting of an algorithm (static or dynamic) could identify an increase of objective stressfulness, while leaving decreases of stress largely undetected. We were further interested in whether a dynamic algorithm might outperform a static one. Hence, this study aims to provide further evidence for the validity of algorithms to detect meaningful stress-related decreases of HRV independently from metabolic demands.
2.1. Participants An already published data set of Schwerdtfeger and Dick was used to simulate the algorithm settings . In total, 38 male firefighters took part in this study. The mean age of the participants was M = 32.71 years ( SD = 6.90). An EMA was conducted to collect data throughout 24 h (for details see ). The study was approved by the ethics committee, and informed consent was obtained from all participants. 2.2. Material and Instrument 2.2.1. EMA At each random and self-paced prompt, the firefighters rated their perceived stress with two items (‘I feel stressed’, ‘I feel burdened’). Furthermore, state negative affect was assessed with five items from the positive and negative affect schedule (PANAS, ) with the following items: ‘I am upset’, ‘I feel distressed’, ‘I feel agitated’, ‘I feel tense’, ‘I am nervous’ (for more details see ). In total, 571 valid prompts were available, which took place during one of the three objective situations of increasing stress. Between-person (R kR ) and within-person (R C ) reliability was good for both measures (stress: R kR = 0.87, R C = 0.80; negative affect: R kR = 0.94, R C = 0.71). 2.2.2. Objective Changes of Stress: From Routine Work at the Fire Station to More Stressful Emergency Operations Work episodes were continuously coded (in 1-min steps) as either covering routine work at the fire station (non-stressful, 81.4% of the 24 h), low-stressful routine operations (11.2% of the 24 h), or high-stressful emergency operations (7.4% of the 24 h). There were 199 operations in total ( M = 5.24 per participant) of which 40% were coded as highly stressful (see for more details). For the present study, we calculated the moment when a change in objective stressfulness took place, and the firefighters had a routine or emergency operation. The timing of the three levels of stress was based on the official operating times of the primary control unit. 
Based on this continuous information (1-min steps), we identified moments when an increase of objective stressfulness was observed (i.e., a change from non-stressful to low-stressful, from non-stressful to high-stressful, and from low-stressful to high-stressful operations). Furthermore, we identified the moments when a decrease of objective stressfulness took place. We only considered objective changes in stressfulness when these situations lasted at least 20 min. Moments of objective stress increases were coded with 1 ( n = 182), and moments of decreases of objective stressfulness were coded with 0 ( n = 170; e.g., from high-stressful to low-stressful operations). The mean time between changes of stress was 116.18 min ( SD = 142.34 min) with a minimum of 20 min and a maximum of 909 min. Based on this information, we were able to calculate if AddHRVr triggers were associated with moments of increases or decreases of objective stress. We expected that especially increases in objective stress should go along with AddHRVr triggers, and a decrease should go along with the absence of AddHRVr triggers. Therefore, we expected a positive relationship at the second step of analyses. 2.2.3. Physiological Ambulatory Monitoring of ECG and Movement ECG and bodily movement were recorded with the physiological ambulatory monitoring device EcgMove3 (movisens GmbH, Karlsruhe, Germany) throughout one weekday (24 h). The ECG signal was sampled with 12-bit resolution and stored at 1024 Hz. Bodily movement was recorded at 64 Hz via 3D acceleration sampling. In combination with an integrated pressure sensor, activity energy expenditure (AEE) in kcal was calculated. 2.3. Data Preprocessing The EcgMove3 device delivers information on several variables, including HRV, movement, and AEE, in real time. The device calculates relevant variables (e.g., RMSSD) in adjacent 1-min segments, which could be used for the online application of an algorithm. 
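The transition coding described above (per-minute stress levels, episodes of at least 20 min, increases coded 1 and decreases coded 0) can be sketched as follows. This is a minimal illustration with hypothetical data and function names, not the study's actual pipeline:

```python
def stress_transitions(levels, min_duration=20):
    """Given per-minute stress codes (e.g., 0 = station routine, 1 = routine
    operation, 2 = emergency operation), return 1 for each increase and 0 for
    each decrease between stable episodes of at least min_duration minutes."""
    # Collapse the minute stream into (level, run_length) episodes
    episodes = []
    for lvl in levels:
        if episodes and episodes[-1][0] == lvl:
            episodes[-1][1] += 1
        else:
            episodes.append([lvl, 1])
    # Keep only episodes long enough to count as a stable situation
    episodes = [e for e in episodes if e[1] >= min_duration]
    # Code each remaining change; equal levels (after dropping a short
    # interlude) yield no transition
    return [1 if b[0] > a[0] else 0
            for a, b in zip(episodes, episodes[1:]) if a[0] != b[0]]

# 30 min routine, 25 min emergency, a 5-min interlude (dropped), 40 min routine
transitions = stress_transitions([0] * 30 + [2] * 25 + [1] * 5 + [0] * 40)
# → [1, 0]: one increase, one decrease
```

The 5-min episode is discarded under the 20-min rule, mirroring the study's requirement that a situation last at least 20 min before its onset counts as a change in objective stressfulness.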
We used the stored live RMSSD data of the device for the simulation of the algorithm function. These stored online values are automatically scanned for artifacts by the movisens EcgMove3 device during recording. We used the established time domain measure RMSSD (ms) to assess HRV and AEE (kcal) to assess metabolic changes due to movement. 2.4. Simulation of a Dynamic and a Static Algorithm for Detecting AddHRVr In this work, we applied the two-step approach of simulating and developing an algorithm to work in online mode presented by Schwerdtfeger and Rominger . In step 1, the AddHRVr algorithms were simulated at the individual level. By simulating various algorithm adjustments separately for a static and a dynamic algorithm, it can be determined when an algorithm would have detected meaningful HRV decreases and delivered triggers within the 24 h of recording. In step 2, these triggers were used to predict the objective increase of stress. By running bootstrapped multi-level analyses per algorithm setting (500 iterations each), predicting the increase of stress based on the association with a trigger (within 20 min after the objective change of stress), the power and the mean odds associated with a specific algorithm setting were calculated . 2.4.1. Step 1: Simulation of Individual AddHRVr Triggers for Each Firefighter As outlined by Schwerdtfeger and Rominger, the association between AEE and HRV differs between persons . Therefore, a linear regression analysis predicting each firefighter’s RMSSD (ms) by AEE (kcal) was calculated at the first step. This is necessary for the calibration of the algorithm and to account for metabolic demands . Individual linear regressions were based on the total 24 h of recorded data . The resulting scatter plots and linear regression lines were visually inspected for each participant to indicate if outliers were present. 
Few 1-min segments were automatically deleted before calculating regression analyses ( M = 0.16, SD = 0.50, max = 2; for further methodological details see ). The individual linear regression parameters (i.e., intercept and slope) were then used to simulate the algorithms and calculate meaningful RMSSD decreases (see ). For the static algorithm, the continuous 1-min AEE scores were used to calculate the expected RMSSD (due to the regression function), which was compared with the corresponding and actual RMSSD of this very minute. If the deviation between actual RMSSD and predicted/expected RMSSD was higher than a predefined threshold (i.e., 0.5 × SD of RMSSD calibration ; see ), this 1-min segment was classified as a meaningful RMSSD decrease. Since the dynamic algorithm should account for (psychologically relevant and) dynamic changes of HRV during the day and therefore should adapt to different HRV levels, we applied a moving average procedure to the continuously recorded HRV signal. The mean HRV (RMSSD) of a 60-min buffer serves as the dynamic intercept to predict the expected RMSSD of each single minute (see ). The content of this buffer changes in 1-min steps, which allows a continuous algorithm adjustment for each minute. The buffer is filled with the corresponding HRV value (of the very minute) if the observed mean AEE of the last 40 min is lower than the average AEE during the calibration. If the observed mean AEE of the last 40 min was higher than the average AEE during calibration, the 60-min buffer is filled with the intercept derived from the linear regression analysis (i.e., HRV without metabolic demands; for the decision tree see ). This replacement of HRV values is necessary, since HRV values accompanied with high AEE will most likely be influenced by movement and corresponding metabolic demands and might therefore not adequately indicate the intended (psychologically relevant and) dynamic HRV changes during the day. 
The algorithm starts with an HRV buffer with the intercept as average and an AEE buffer with the mean AEE during calibration as average. According to the algorithm, a 1-min segment classified as a meaningful RMSSD decrease is not sufficient to provoke an AddHRVr trigger. As illustrated in , three further parameters are implemented in the algorithms: (1) the RMSSD window length (number of 1-min segments included), (2) the RMSSD window threshold (the number of 1-min segments, which have to be classified as a meaningful RMSSD decreases in order to provoke an AddHRVr trigger), and (3) the silent setting. Specifically, if within a predefined period of 5 min (i.e., window length), 4 segments are classified as significant decreases (i.e., RMSSD window threshold), an AddHRVr trigger will be provoked (e.g., 4 out of 5). Following an AddHRVr trigger, the algorithm will remain silent for a predefined time (i.e., silent setting, e.g., 20 min), which prevents the algorithm to trigger further prompts. Importantly, the change of these parameters significantly alters the characteristic of the algorithm (for a detailed exploration of a static algorithm, see ). For example, an algorithm which fires when 4 out of 5 segments are classified as meaningful HRV decreases detects predominantly shorter-lived effects as compared to an algorithm with a 7 out of 10 or even a 13 out of 30 setting. Hence, different algorithms are associated with different alarm-rates and might differ in their psychosocial meaningfulness. For reasons of parsimony, we followed Schwerdtfeger and Rominger and mainly focused on the window length and window threshold and kept the silent setting of 20 min constant . We calculated the resulting trigger information (coded as 0 = absent and 1 = present) at the individual level for all combinations of RMSSD window lengths starting from 2 to 30 and RMSSD window thresholds from 1 to 29 (i.e., 1 out of 2 until 29 out of 30; i.e., 435 different algorithm adjustments). 
These 435 different trigger distributions were the input for the multi-level simulation at step 2. 2.4.2. Step 2: Simulation of the AddHRVr Triggers to Predict Objective Changes of Stressfulness Similar to former procedures , the predictive value of an AddHRVr trigger relative to an increase of objective stress was determined via calculating the associations of an AddHRVr trigger within the transition of objective stress. Thus, we aimed to evaluate the sensitivity of various AddHRVr algorithms by comparing the associations of AddHRVr triggers with the objective change of stressfulness (i.e., increase vs. decrease of stress). A reliable association between transitions of objective stress and AddHRVr triggered prompts would suggest psychophysiological sensitivity of the algorithm settings. Statistical evaluation was accomplished via the lme4 package (linear mixed effects modeling ) in R (version 4.0.4 ) using the glmer function (generalized linear mixed-effects models). Specifically, within 20 min after an objective change of stressfulness, the prevalence of an AddHRVr trigger was determined. The triggers identified (coded as 0 = absent and 1 = present) were subjected to a multilevel model predicting increases of objective stressfulness. In total, 435 different combinations of trigger settings were analyzed (i.e., RMSSD window length, RMSSD window threshold) with a silent setting of 20 min. These 435 multilevel models were bootstrapped with 500 iterations each. For each iteration, data of 38 participants were sampled with replacement. We estimated statistical power, effect sizes (i.e., odds), confidence intervals, and the mean number of triggered increases and decreases of all combinations of the algorithm’s settings. Statistical power was calculated by dividing the number of iterations with a p < 0.05 by the total number of (valid) iterations (hence, the ratio between significant effects and total iterations). 
Based on this information, 3-dimensional hyperplanes were generated in R (plotly package ) to visualize the properties (i.e., power) of the different algorithm settings (i.e., window length and threshold). In accordance with Schwerdtfeger and Rominger, an algorithm setting with high power, solid effect size (confidence intervals), and a reasonable number of AddHRVr triggers should be favored for an online validation study .
An already published data set of Schwerdtfeger and Dick was used to simulate the algorithm settings . In total, 38 male firefighters took part in the study (mean age M = 32.71 years, SD = 6.90). An ecological momentary assessment (EMA) was conducted to collect data throughout 24 h (for details see ). The study was approved by the ethics committee, and informed consent was obtained from all participants.
2.2.1. EMA

At each random and self-paced prompt, the firefighters rated their perceived stress with two items (‘I feel stressed’, ‘I feel burdened’). Furthermore, state negative affect was assessed with five items from the positive and negative affect schedule (PANAS, ): ‘I am upset’, ‘I feel distressed’, ‘I feel agitated’, ‘I feel tense’, ‘I am nervous’ (for more details see ). In total, 571 valid prompts were available, each of which took place during one of the three objective stress conditions. Between-person (R kR) and within-person (R C) reliability was good for both measures (stress: R kR = 0.87, R C = 0.80; negative affect: R kR = 0.94, R C = 0.71).

2.2.2. Objective Changes of Stress: From Routine Work at the Fire Station to More Stressful Emergency Operations

Work episodes were continuously coded (in 1-min steps) as covering either routine work at the fire station (non-stressful, 81.4% of the 24 h), low-stressful routine operations (11.2% of the 24 h), or high-stressful emergency operations (7.4% of the 24 h). There were 199 operations in total ( M = 5.24 per participant), of which 40% were coded as highly stressful (see for more details). For the present study, we determined the moments when a change in objective stressfulness took place, i.e., when the firefighters entered or left a routine or emergency operation. The timing of the three levels of stress was based on the official operating times of the primary control unit. Based on this continuous coding (1-min steps), we identified moments when objective stressfulness increased (i.e., a change from non-stressful to low-stressful, from non-stressful to high-stressful, or from low-stressful to high-stressful operations) and moments when it decreased. We only considered objective changes in stressfulness when the resulting situations lasted at least 20 min.
Moments of objective stress increases were coded with 1 ( n = 182), and moments of decreases of objective stressfulness were coded with 0 ( n = 170; e.g., a change from high-stressful to low-stressful operations). The mean time between changes of stress was 116.18 min ( SD = 142.34 min), with a minimum of 20 min and a maximum of 909 min. Based on this information, we could evaluate whether AddHRVr triggers were associated with moments of increases or decreases of objective stress. We expected increases in objective stress in particular to go along with AddHRVr triggers and decreases to go along with their absence; therefore, we expected a positive relationship at the second step of the analyses.

2.2.3. Physiological Ambulatory Monitoring of ECG and Movement

ECG and bodily movement were recorded with the physiological ambulatory monitoring device EcgMove3 (movisens GmbH, Karlsruhe, Germany) throughout one weekday (24 h). The ECG signal was sampled with 12-bit resolution and stored at 1024 Hz. Bodily movement was recorded at 64 Hz via 3D acceleration sampling. In combination with an integrated pressure sensor, activity energy expenditure (AEE) in kcal was calculated.
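The minute-wise transition coding described in Section 2.2.2 (increase = 1, decrease = 0, episodes of at least 20 min) can be sketched as follows. This is an illustrative Python reconstruction, not the study's code; the level values and function name are assumptions.

```python
# Illustrative sketch (not the study's code): derive increase/decrease
# events from a 1-min stressfulness coding, where 0 = routine work,
# 1 = low-stressful routine operation, 2 = high-stressful emergency
# operation, keeping only episodes that last at least 20 min.

def stress_transitions(levels, min_duration=20):
    """Return (onset_minute, code) pairs; code 1 = increase, 0 = decrease."""
    # collapse the minute-wise coding into [level, run_length] episodes
    episodes = []
    for lv in levels:
        if episodes and episodes[-1][0] == lv:
            episodes[-1][1] += 1
        else:
            episodes.append([lv, 1])
    # keep only episodes lasting at least min_duration minutes
    episodes = [e for e in episodes if e[1] >= min_duration]
    events = []
    t = episodes[0][1] if episodes else 0
    for k in range(1, len(episodes)):
        if episodes[k][0] != episodes[k - 1][0]:
            events.append((t, 1 if episodes[k][0] > episodes[k - 1][0] else 0))
        t += episodes[k][1]
    return events
```

Under this rule, for example, 30 min of routine work followed by 25 min of an emergency operation yields one increase event at minute 30.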
2.3. Data Preprocessing

The EcgMove3 device delivers several variables, including HRV, movement, and AEE, in real time. The device calculates the relevant variables (e.g., RMSSD) in adjacent 1-min segments, which can be used for the online application of an algorithm. We used the stored live RMSSD data of the device to simulate the algorithm function. These stored online values are automatically scanned for artifacts by the movisens EcgMove3 device during recording. We used the established time domain measure RMSSD (ms) to assess HRV and AEE (kcal) to assess metabolic changes due to movement.
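As a reminder of the HRV measure used throughout, RMSSD is the root mean square of successive differences of RR intervals. A minimal sketch (the device computes this on-line per 1-min segment; this illustrative version assumes artifact-free RR intervals in ms):

```python
import math

# Sketch: RMSSD (ms) for one 1-min segment of RR intervals (ms).
def rmssd(rr_ms):
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```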
2.4. Simulation of a Dynamic and a Static Algorithm for Detecting AddHRVr

In this work, we applied the two-step approach of simulating and developing an algorithm to work in online mode presented by Schwerdtfeger and Rominger . In step 1, the AddHRVr algorithms were simulated at the individual level. By simulating various algorithm adjustments separately for a static and a dynamic algorithm, it can be determined when an algorithm would have detected meaningful HRV decreases and delivered triggers within the 24 h of recording. In step 2, these triggers were used to predict the objective increase of stress. By running bootstrapped multi-level analyses per algorithm setting (500 iterations each), predicting the increase of stress based on the association with a trigger (within 20 min after the objective change of stress), the power and the mean odds associated with a specific algorithm setting were calculated .

2.4.1. Step 1: Simulation of Individual AddHRVr Triggers for Each Firefighter

As outlined by Schwerdtfeger and Rominger, the association between AEE and HRV differs between persons . Therefore, in a first step, a linear regression analysis predicting each firefighter's RMSSD (ms) by AEE (kcal) was calculated. This is necessary to calibrate the algorithm and to account for metabolic demands . Individual linear regressions were based on the total 24 h of recorded data . The resulting scatter plots and linear regression lines were visually inspected for each participant to check for outliers. Few 1-min segments were automatically deleted before calculating the regression analyses ( M = 0.16, SD = 0.50, max = 2; for further methodological details see ). The individual linear regression parameters (i.e., intercept and slope) were then used to simulate the algorithms and calculate meaningful RMSSD decreases (see ).
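The per-person calibration step is an ordinary least-squares regression of RMSSD on AEE over the 24 h of 1-min segments; the resulting intercept and slope parameterize both algorithms. A minimal illustrative sketch (without the paper's outlier screening; function and variable names are assumptions):

```python
# Sketch of the calibration regression: predict RMSSD (ms) from AEE (kcal)
# and return the per-person intercept and slope (ordinary least squares).
def calibrate(aee, rmssd_vals):
    n = len(aee)
    mean_x = sum(aee) / n
    mean_y = sum(rmssd_vals) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(aee, rmssd_vals))
    sxx = sum((x - mean_x) ** 2 for x in aee)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept, slope
```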
For the static algorithm, the continuous 1-min AEE scores were used to calculate the expected RMSSD (via the regression function), which was compared with the actual RMSSD of that very minute. If the deviation between actual and predicted RMSSD exceeded a predefined threshold (i.e., 0.5 × SD of the calibration RMSSD ; see ), this 1-min segment was classified as a meaningful RMSSD decrease. Since the dynamic algorithm should account for (psychologically relevant) dynamic changes of HRV during the day and therefore adapt to different HRV levels, we applied a moving average procedure to the continuously recorded HRV signal. The mean HRV (RMSSD) of a 60-min buffer serves as the dynamic intercept to predict the expected RMSSD of each single minute (see ). The content of this buffer changes in 1-min steps, which allows a continuous adjustment of the algorithm for each minute. The buffer is filled with the HRV value of the very minute if the observed mean AEE of the last 40 min is lower than the average AEE during calibration. If the observed mean AEE of the last 40 min is higher than the average AEE during calibration, the 60-min buffer is instead filled with the intercept derived from the linear regression analysis (i.e., HRV without metabolic demands; for the decision tree see ). This replacement of HRV values is necessary because HRV values accompanied by high AEE will most likely be influenced by movement and the corresponding metabolic demands and might therefore not adequately indicate the intended (psychologically relevant) dynamic HRV changes during the day. The algorithm starts with an HRV buffer containing the intercept as its average and an AEE buffer containing the mean calibration AEE as its average. A single 1-min segment classified as a meaningful RMSSD decrease is, however, not sufficient to provoke an AddHRVr trigger.
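The dynamic algorithm's buffer logic can be sketched as follows. This is an illustrative Python reconstruction: the buffer sizes and fill rule follow the text, while the function name and the example calibration values are assumptions.

```python
from collections import deque

# Sketch: 60-min RMSSD buffer whose mean serves as the dynamic intercept.
# Each minute the buffer receives either the observed RMSSD (if the mean
# AEE of the last 40 min is below the calibration mean) or the calibration
# intercept (if movement, and hence metabolic demand, is high).
def update_buffers(hrv_buf, aee_buf, rmssd_now, aee_now,
                   cal_intercept, cal_mean_aee):
    aee_buf.append(aee_now)                    # 40-min AEE history
    recent_aee = sum(aee_buf) / len(aee_buf)
    if recent_aee < cal_mean_aee:
        hrv_buf.append(rmssd_now)              # low movement: trust HRV
    else:
        hrv_buf.append(cal_intercept)          # high movement: substitute
    return sum(hrv_buf) / len(hrv_buf)         # dynamic intercept

# the algorithm starts with the calibration values as buffer averages
hrv_buf = deque([55.0], maxlen=60)   # calibration intercept (example value)
aee_buf = deque([1.2], maxlen=40)    # calibration mean AEE (example value)
```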
As illustrated in , three further parameters are implemented in the algorithms: (1) the RMSSD window length (the number of 1-min segments included), (2) the RMSSD window threshold (the number of 1-min segments that have to be classified as meaningful RMSSD decreases in order to provoke an AddHRVr trigger), and (3) the silent setting. Specifically, if within a predefined period of 5 min (i.e., window length), 4 segments are classified as significant decreases (i.e., RMSSD window threshold), an AddHRVr trigger is provoked (e.g., 4 out of 5). Following an AddHRVr trigger, the algorithm remains silent for a predefined time (i.e., silent setting, e.g., 20 min), which prevents the algorithm from triggering further prompts. Importantly, changing these parameters substantially alters the characteristics of the algorithm (for a detailed exploration of a static algorithm, see ). For example, an algorithm that fires when 4 out of 5 segments are classified as meaningful HRV decreases predominantly detects shorter-lived effects compared to an algorithm with a 7 out of 10 or even a 13 out of 30 setting. Hence, different algorithms are associated with different alarm rates and might differ in their psychosocial meaningfulness. For reasons of parsimony, we followed Schwerdtfeger and Rominger, mainly focused on the window length and window threshold, and kept the silent setting constant at 20 min . We calculated the resulting trigger information (coded as 0 = absent and 1 = present) at the individual level for all combinations of RMSSD window lengths from 2 to 30 and RMSSD window thresholds from 1 to 29 (i.e., 1 out of 2 up to 29 out of 30; 435 different algorithm adjustments). These 435 different trigger distributions were the input for the multi-level simulation at step 2.

2.4.2. Step 2: Simulation of the AddHRVr Triggers to Predict Objective Changes of Stressfulness

Similar to former procedures , the predictive value of an AddHRVr trigger relative to an increase of objective stress was determined by calculating the association of AddHRVr triggers with transitions of objective stress. Thus, we aimed to evaluate the sensitivity of various AddHRVr algorithms by comparing the associations of AddHRVr triggers with the objective change of stressfulness (i.e., increase vs. decrease of stress). A reliable association between transitions of objective stress and AddHRVr-triggered prompts would suggest psychophysiological sensitivity of the algorithm settings. Statistical evaluation was accomplished via the lme4 package (linear mixed effects modeling ) in R (version 4.0.4 ) using the glmer function (generalized linear mixed-effects models). Specifically, the prevalence of an AddHRVr trigger within 20 min after an objective change of stressfulness was determined. The identified triggers (coded as 0 = absent and 1 = present) were subjected to a multilevel model predicting increases of objective stressfulness. In total, 435 different combinations of trigger settings were analyzed (i.e., RMSSD window length, RMSSD window threshold) with a silent setting of 20 min. These 435 multilevel models were bootstrapped with 500 iterations each. For each iteration, data of 38 participants were sampled with replacement. We estimated statistical power, effect sizes (i.e., odds), confidence intervals, and the mean number of triggered increases and decreases for all combinations of the algorithm's settings. Statistical power was calculated by dividing the number of iterations with p < 0.05 by the total number of (valid) iterations (hence, the ratio between significant effects and total iterations).
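The bootstrap procedure can be sketched as follows. The actual models were glmer fits in R; in this illustrative Python sketch the model fit is abstracted into a callback returning the p-value of the trigger effect, so only the participant-level resampling and the power computation are shown.

```python
import random

# Sketch: per algorithm setting, resample participants with replacement,
# refit the model, and define power as the share of significant iterations.
def bootstrap_power(participants, fit_model, iterations=500,
                    alpha=0.05, seed=1):
    rng = random.Random(seed)
    p_values = []
    for _ in range(iterations):
        sample = [rng.choice(participants) for _ in participants]
        p_values.append(fit_model(sample))   # p-value of the trigger effect
    return sum(p < alpha for p in p_values) / len(p_values)
```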
Based on this information, 3-dimensional hyperplanes were generated in R (plotly package ) to visualize the properties (i.e., power) of the different algorithm settings (i.e., window length and threshold). In accordance with Schwerdtfeger and Rominger, an algorithm setting with high power, solid effect size (confidence intervals), and a reasonable number of AddHRVr triggers should be favored for an online validation study .
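Putting the step 1 pieces together, the trigger rule itself (window length, window threshold, and silent setting, e.g., 7 out of 10 with a 20-min silent period) can be sketched as follows; an illustrative reconstruction, not the authors' implementation.

```python
# Sketch (illustrative): scan the minute-wise classification of meaningful
# RMSSD decreases (1 = decrease, 0 = no decrease); fire a trigger when at
# least `threshold` of the last `window` minutes are decreases, then stay
# silent for `silent` minutes.
def simulate_triggers(decreases, window=10, threshold=7, silent=20):
    triggers, quiet_until = [], -1
    for t in range(len(decreases)):
        if t <= quiet_until or t + 1 < window:
            continue
        if sum(decreases[t - window + 1:t + 1]) >= threshold:
            triggers.append(t)
            quiet_until = t + silent
    return triggers
```

With a narrow setting such as 4 out of 5, the same scan fires on short-lived dips, whereas 13 out of 30 requires a sustained decrease, which illustrates the alarm-rate trade-off discussed above.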
3. Results

3.1. Perceived Stress, Negative Affect, and HRV (RMSSD) during the Three Different Levels of Objective Stress (Routine Work vs. Routine Operations vs. Emergency Operations)

In order to provide evidence for the validity of the objective levels of stress, we calculated three random intercept models with the objective level of stress as a fixed effect predicting perceived stress, negative affect, and HRV (mean RMSSD 10 min before each prompt). These three analyses indicated increased stress and negative affect during routine operations (vs. routine work at the fire station; see ) and during emergency operations (vs. routine work at the fire station). In accordance with this, HRV (RMSSD) showed decreases in these situations, which were independent of changes in AEE. As an important prerequisite for simulating AddHRVr algorithms to detect objective changes in stress, this pattern of findings provides evidence for the validity of the objective classification of stressfulness in firefighters.

3.2. Simulation of Static and Dynamic AddHRV Algorithms

3.2.1. Step 1: AddHRVr Algorithm Simulation on an Individual Level

presents the descriptive statistics of the resulting individually adjusted parameters of the static and dynamic algorithms by means of a linear regression approach. All parameters showed high interindividual variation. Based on this information, the distribution of static and dynamic AddHRVr triggers can be simulated individually. Panel A of shows the AddHRVr triggers for a dynamic algorithm setting and panel B for a static algorithm (both with 4 out of 6). The number as well as the temporal distribution of triggers (green asterisks) differed substantially between the static and the dynamic algorithm. This difference in delivered triggers can be explained by the intended properties of the dynamic algorithm, which adapts to changes of the participants' HRV levels.
These adaptations result in a dynamic change of the estimated threshold (predicted RMSSD minus 0.5 × SD of the calibration RMSSD; bold blue line in ), which allows meaningful decreases of HRV to be detected even when the level of HRV has increased. Furthermore, as illustrated in , the dynamic algorithm was associated with a lower total number of delivered triggers than the static algorithm when the silent setting was set to 10 min ( t (434) = 27.82, p < 0.001), 20 min ( t (434) = 19.60, p < 0.001), 30 min ( t (434) = 11.98, p < 0.001), and 40 min ( t (434) = 6.11, p < 0.001), but the difference was not significant with a silent setting of 50 min ( t (434) = 0.14, p = 0.892). The dynamic algorithm was associated with a higher total number of delivered triggers when the silent setting was 60 min ( t (434) = −5.71, p < 0.001). For a silent setting of 20 min, which was applied in the present simulation, the mean total number of delivered static triggers per setting was M = 22.14 ( SD = 15.09), and for the dynamic algorithm it was M = 21.31 ( SD = 15.73).

3.2.2. Step 2: Simulation of Algorithm Settings to Detect Objective Transitions of Stress

In order to derive the most sensitive algorithm setting for predicting an increase of stress, all 435 bootstrap simulations were inspected for the highest power, separately for the static and the dynamic algorithm (i.e., a total of 870 bootstrapped simulations; A; see for an interactive 3D illustration of the dynamic algorithm). The highest power of 0.680 was observed for the algorithm setting with 7 out of 10 (silent setting of 20 min). and show the adjustments with similar power scores for the dynamic and the static algorithm. Effect estimates are the percentage change in the odds of being an increase of objective stress (i.e., (odds ratio − 1) × 100).
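As a quick arithmetic check of this effect metric: an odds ratio of 1.99 corresponds to a (1.99 − 1) × 100, i.e., roughly 99%, increase in the odds.

```python
# percentage change in odds implied by an odds ratio
def pct_odds_change(odds_ratio):
    return (odds_ratio - 1) * 100
```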
This means that if a trigger was delivered within a time window of 20 min after a transition of stress, the odds of that transition being an increase of objective stress rose by, e.g., 99% in the case of the algorithm setting with 7 out of 10 (see ). Although the power scores did not differ significantly between the dynamic and the static AddHRVr algorithm ( t (427) = 0.36, p = 0.722), the estimated effects for the dynamic algorithm were significantly more positive than the effects of the static algorithm ( t (427) = 6.09, p < 0.001). When additionally taking the specific algorithm adjustments into account, it can be concluded that the dynamic AddHRVr algorithm predicted increases of objective stress more sensitively than the static algorithm ( and ). For the 7 out of 10 setting, the total number of delivered triggers was 578, indicating that each participant would have received about 15.21 triggers within the 24 h of recording had an interactive psychophysiological ambulatory assessment been conducted with these settings. For the static algorithm, a setting of 13 out of 30 showed the highest power of 0.624 (see and ). However, the estimated effect size was negative, suggesting that within a time window of 20 min after an objective transition of stress, a delivered trigger would decrease the odds of being an increase of stress by 44%; moreover, this algorithm triggered more decreases of stress (i.e., 50.30) than increases (i.e., 35.27; see ). Since the achieved power of the dynamic algorithm did not reach the 0.70 threshold, we further simulated how many participants should be sampled in an online study to reach sufficient power with the suggested setting of 7 out of 10 (silent setting of 20 min).
As depicted in , the simulation reached a robust power of above 0.70 with a samples size of N = 41, the 0.80 threshold with N = 56 participants, and a power of 0.90 with N = 79 participants.
In order to provide evidence for the validity of the objective levels of stress, we calculated three random intercept models with the objective level of stress as a fixed effect predicting perceived stress, negative affect, and HRV (mean RMSSD in the 10 min before each prompt). These three analyses indicated increased stress and negative affect during routine operations (vs. routine work at the fire station; see ) and during emergency operations (vs. routine work at the fire station). In accordance with this, HRV (RMSSD) decreased in these situations, independently of changes in AEE. As an important prerequisite for simulating AddHRVr algorithms to detect objective changes in stress, this pattern of findings provides evidence for the validity of the objective classification of stressfulness in firefighters.
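All HRV quantities in this section rest on RMSSD computed over short segments of inter-beat intervals. As a minimal, stdlib-only sketch of that metric (the function name and the example series are ours, not the authors'):

```python
import math

def rmssd(ibis_ms):
    """Root mean square of successive differences of inter-beat intervals (ms)."""
    diffs = [b - a for a, b in zip(ibis_ms, ibis_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# A short illustrative series of inter-beat intervals in milliseconds:
print(round(rmssd([800, 810, 790, 805, 815]), 2))  # 14.36
```

In the study, this statistic is computed per 1-min segment and then aggregated (e.g., the mean over the 10 min preceding each prompt).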
3.2.1. Step 1: AddHRVr Algorithm Simulation on an Individual Level

presents the descriptive statistics of the resulting individually adjusted parameters of the static and dynamic algorithms by means of a linear regression approach. All parameters showed high interindividual variation. Based on this information, the distribution of static and dynamic AddHRVr triggers can be simulated individually. Panel A of shows the AddHRVr triggers for a dynamic algorithm setting and panel B for a static algorithm (both with 4 out of 6). The number as well as the temporal distribution of triggers (green asterisks) substantially differed between the static and the dynamic algorithm. This difference in delivered triggers can be explained by the intended properties of the dynamic algorithm, which adapts to changes in participants' HRV levels. These adaptations result in a dynamic change of the estimated threshold (predicted RMSSD − 0.5 × SD RMSSD calibration ; bold blue line in ), which allows one to detect meaningful decreases of HRV even if the level of HRV increased. Furthermore, as illustrated in , the dynamic algorithm was associated with a lower total number of delivered triggers than the static algorithm when the silent setting was set to 10 min ( t (434) = 27.82, p < 0.001), 20 min ( t (434) = 19.60, p < 0.001), 30 min ( t (434) = 11.98, p < 0.001), and 40 min ( t (434) = 6.11, p < 0.001), but was not significantly different with a silent setting of 50 min ( t (434) = 0.14, p = 0.892). The dynamic algorithm was associated with a higher total number of delivered triggers when the silent setting was 60 min ( t (434) = −5.71, p < 0.001). For a silent setting of 20 min, which was applied in the present simulation, the mean total number of delivered static triggers per setting was M = 22.14 ( SD = 15.09) and for the dynamic algorithm M = 21.31 ( SD = 15.73).

3.2.2. Step 2: Simulation of Algorithm Settings to Detect Objective Transitions of Stress

In order to derive the most sensitive algorithm setting for predicting an increase of stress, all 435 bootstrap simulations were inspected for the highest power separately for the static and the dynamic algorithm (i.e., a total of 870 bootstrapped simulations; A; see for an interactive 3D illustration of the dynamic algorithm). The highest power of 0.680 was observed for the algorithm setting with 7 out of 10 (silent setting of 20 min). and show the adjustments with similar power scores for the dynamic and the static algorithm. Effect estimates are the percentage change in odds of being an increase of objective stress (i.e., (odds ratio − 1) × 100). This means that when a trigger was delivered within a time window of 20 min after a transition of stress, the odds of being an increase of objective stress increased by, e.g., 99% in the case of the algorithm setting with 7 out of 10 (see ). Although the power scores were not significantly different between the dynamic and the static AddHRVr algorithm ( t (427) = 0.36, p = 0.722), the observed estimated effects for the dynamic algorithm were significantly more positive than the effects of the static algorithm ( t (427) = 6.09, p < 0.001). When additionally taking the specific algorithm adjustments into account, it can be concluded that the dynamic AddHRVr algorithm predicted increases of objective stress more sensitively than the static algorithm ( and ). Specifically, if a trigger was delivered within a time window of 20 min after a transition of stress, this increased the odds of being an increase of objective stress by 99% at an algorithm setting of 7 out of 10.
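To make the trigger logic concrete, the following stdlib-only sketch implements a k-out-of-n AddHRVr rule with a silent period. One simplification to flag: the paper derives the predicted RMSSD from an individual regression on bodily movement, whereas this sketch stands in a running mean of the preceding 60 min; all names and parameter defaults here are illustrative assumptions.

```python
from collections import deque
from statistics import mean

def make_dynamic_trigger(k=7, n=10, silent_min=20, sd_calib=12.0, window_min=60):
    """k-out-of-n AddHRVr trigger with a dynamically updated threshold.

    The threshold is 'predicted RMSSD - 0.5 * SD of the calibration RMSSD';
    the prediction is simplified here to the mean of the last `window_min`
    minutes (the paper uses an individual movement-based regression instead).
    """
    recent = deque(maxlen=n)            # below-threshold flags for the last n minutes
    history = deque(maxlen=window_min)  # RMSSD values of the preceding hour
    state = {"silent_until": -1}

    def update(minute, rmssd_1min):
        history.append(rmssd_1min)
        threshold = mean(history) - 0.5 * sd_calib
        recent.append(rmssd_1min < threshold)
        if minute <= state["silent_until"] or len(recent) < n:
            return False                 # still silent, or not enough data yet
        if sum(recent) >= k:             # k of the last n minutes below threshold
            state["silent_until"] = minute + silent_min
            recent.clear()
            return True
        return False

    return update

# 60 min of stable HRV, then a sustained drop: a trigger fires once the
# 7-out-of-10 criterion is met, and the silent period suppresses repeats.
trigger = make_dynamic_trigger()
fired = [m for m in range(70) if trigger(m, 50.0 if m < 60 else 20.0)]
print(fired)  # [66]
```

Because the threshold tracks the preceding hour, a drop relative to an elevated baseline can still fire, which is the property that distinguishes the dynamic from the static variant.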
The total number of delivered triggers in this setting was 578, indicating that each participant would have received about 15.21 triggers within the 24 h of recording had an interactive psychophysiological ambulatory assessment been conducted with these settings. For the static algorithm, a setting of 13 out of 30 showed the highest power of 0.624 (see and ). However, the estimated effect size was negative, suggesting that within a time window of 20 min after an objective transition of stress, a delivered trigger would decrease the odds of being an increase of stress by 44%, and the algorithm triggered more decreases of stress (i.e., 50.30) than increases (i.e., 35.27; see ). Since the achieved power of the dynamic algorithm did not reach the 0.70 threshold, we further simulated how many participants should be sampled in an online study to reach sufficient power with the suggested setting of 7 out of 10 (silent setting 20 min). As depicted in , the simulation reached a robust power of above 0.70 with a sample size of N = 41, the 0.80 threshold with N = 56 participants, and a power of 0.90 with N = 79 participants.
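The sample-size extrapolation above can be reproduced in spirit with a small Monte Carlo power simulation. The effect sizes below are illustrative placeholders, not the study's estimates, and a two-proportion z-test merely stands in for the logistic regression used in the paper; the helper also shows the effect metric used above, (odds ratio − 1) × 100.

```python
import random
from math import sqrt

def pct_change_in_odds(odds_ratio):
    """Effect metric used in the text: percentage change in odds."""
    return (odds_ratio - 1) * 100

def estimated_power(n, p_stress=0.66, p_control=0.50, sims=2000, seed=1):
    """Monte Carlo power of a two-proportion z-test (alpha = 0.05, two-sided).

    Placeholder scenario: triggers follow stress increases with probability
    p_stress vs. p_control otherwise; n observations per group.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        a = sum(rng.random() < p_stress for _ in range(n))
        b = sum(rng.random() < p_control for _ in range(n))
        p_pool = (a + b) / (2 * n)
        se = sqrt(2 * p_pool * (1 - p_pool) / n)
        if se > 0 and abs(a - b) / n / se > 1.96:
            hits += 1
    return hits / sims

print(round(pct_change_in_odds(1.99), 2))          # 99.0
print(estimated_power(40) < estimated_power(150))  # power grows with N
```

Scanning `estimated_power` over increasing N and picking the first value that crosses 0.70, 0.80, or 0.90 mirrors the kind of curve reported above.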
The aim of this study was to demonstrate a simulation approach to derive the settings of a static and a dynamic AddHRVr algorithm to index increases of stress. We were specifically interested in showing that this simulation approach can be applied to objective indicators of transitions of stress in firefighters, indicating the validity of an AddHRVr algorithm. By simulating algorithm settings along several dimensions, separately for a static and a dynamic algorithm, we arrived at an algorithm specification of 7 out of 10 min-segments with AddHRVr exceeding an individually predefined threshold of predicted RMSSD for a dynamic algorithm. Importantly, this study applied a procedure that could be useful to derive sensitive settings for a psychosocially meaningful AddHRVr algorithm. While previous research was mainly concerned with static algorithms and focused on subjective psychological states , we applied an explorative approach to determine which algorithm settings are particularly sensitive to objective transitions of stressfulness in firefighters, which in turn are associated with decreased HRV (RMSSD) as well as increased perceived stress and subjectively rated negative affect . It should be noted, though, that the settings derived at step 2 of this study could differ in other populations and particularly for other psychosocial concepts (e.g., worry, rumination, anger, or fear). However, the individual parameters derived at step 1 of the present study are similar to the parameters calculated by Schwerdtfeger and Rominger, although they predominantly investigated young students . Specifically, the observed RMSSD and the linear correlation between bodily movement and RMSSD are largely comparable. This finding indicates some robustness of the linear regression approach for individual algorithm adjustments based on physiological ambulatory assessment of several hours during everyday life .
Nonetheless, it seems mandatory to validate the findings of step 2 in subsequent research and to analyze the specificity of the algorithm settings (for further details on the validation of derived algorithms, see ). Finally, online application of the derived algorithm settings is the gold standard of validation. However, a simulation approach is essential to arrive at algorithm settings that would work online, since the potential settings of an algorithm are infinite. Doing this in the field by applying different settings in online studies in different samples would certainly not be feasible. To systematically apply different settings of RMSSD window length and RMSSD threshold in online studies, we would have needed 435 different samples with 38 firefighters each, resulting in a total sample size of 16,530 firefighters. For a systematic evaluation in a within-subjects design, we would have needed to observe 38 firefighters for 435 days, changing the settings every 24 h. Additionally, the sampling needs would be further multiplied if researchers were interested in the impact of variations of the silent settings or in comparing static and dynamic AddHRVr algorithms. The dynamic AddHRVr algorithm, which adapts to previous HRV (60 min), constitutes a promising alternative to a static algorithm. The present simulation approach indicated that a dynamic AddHRVr algorithm shows remarkably different characteristics compared to a static algorithm. First, the number of delivered triggers was significantly lower for various silent settings (from 10 min to 40 min). Second, the power analysis derived from the bootstrap method showed a different pattern of peaking regions (although there was no mean difference in power). While the dynamic AddHRVr algorithm showed the highest power at shorter-lived settings (e.g., 7 out of 10), the static algorithms were based on longer RMSSD window lengths (e.g., 13 out of 30).
Third, the observed effects were more positive for the dynamic algorithm than for the static algorithm. A closer look at the most powerful settings indicated that the dynamic algorithm showed the expected positive effects, whereas the static algorithm even showed negative effects and a reversed pattern of delivered triggers. This is an astounding result and indicates the high complexity of algorithm settings, since positive odds were expected for both the static and the dynamic algorithm. Negative effect sizes, however, suggest that some of the simulated static AddHRVr algorithm settings are not valid and therefore do not detect stress. This interpretation is in line with the observation that the highest power of the static algorithm was a relatively low 0.624 (at 13 out of 30). Furthermore, the power illustrations of the static and dynamic algorithms presented in indicate that the pattern of peaking for the static algorithm seems to be less localized than for the dynamic algorithm. These observations are in line with the assumption that the static algorithm might deliver invalid triggers not associated with increases of stress and therefore more likely capture other psychological aspects of HRV reductions. Furthermore, it should be noted that the simulation is based on the assumption that increases of objective stress should be triggered within 20 min following the beginning of an operation. Therefore, the simulation approach also includes rapid psychophysiological fluctuations, for which the dynamic AddHRVr algorithm seems to be more sensitive. This assumption is further strengthened by the observation that the static algorithm only achieved a power of 0.424 with odds of 69% for the setting 7 out of 10 (which was the most powerful setting for the dynamic algorithm). However, when focusing on longer effects (within 40 min) and longer operations (at least 40 min), the static algorithms showed better power scores.
The setting of 1 out of 10 even reached a power of 0.980 with a percentage change in odds of 365% (with a silent setting of 20 min; for a 3D illustration of power see ; for odds see ). However, this setting would deliver 46.61 triggers within 24 h per participant, which would not be applicable in psychophysiological assessment studies. Furthermore, it should be noted that in addition to a static linear AddHRVr algorithm , a static inverse AddHRVr algorithm has also been reported in the literature . The inverse algorithm assumes a linear association between HRV and the inverse of bodily movement. Correspondingly, the intercept in a static inverse approach represents HRV at very high (i.e., infinite) levels of bodily movement, while in a static linear regression approach, the intercept represents the participant's HRV without movement, which allows its continuous replacement with the measured HRV (at low levels of bodily movement). Therefore, the dynamic algorithm cannot easily be transferred to the inverse approach, hampering a direct comparison in this study. Nevertheless, we additionally simulated the settings of an inverse AddHRVr algorithm by means of the available data. This simulation indicated good power scores of 0.804 (3D illustration ) as well as a percentage change in odds of 97% for the setting 6 out of 12 (3D illustration ). Furthermore, this setting of the inverse algorithm was associated with 19.58 triggers within 24 h. This indicates that a static inverse AddHRVr algorithm can outperform a static linear algorithm, and that the performance of the inverse algorithm is largely comparable with that of the dynamic approach, ultimately underlining the validity of the dynamic AddHRVr algorithm to detect transitions of stress.
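The difference between the linear and the inverse calibration can be illustrated with ordinary least squares on synthetic data; the `ols` helper and all numbers are ours. In the linear model the intercept is RMSSD at zero movement, whereas in the inverse model (RMSSD regressed on 1/movement) the intercept is RMSSD as movement grows infinite.

```python
def ols(x, y):
    """Slope and intercept of an ordinary least-squares line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

# Static linear calibration: RMSSD ~ movement; intercept = RMSSD at rest.
movement = [2.0, 4.0, 6.0, 8.0]
rmssd_lin = [59.0, 58.0, 57.0, 56.0]         # generated from 60 - 0.5 * movement
slope, rest_hrv = ols(movement, rmssd_lin)
print(round(slope, 6), round(rest_hrv, 6))   # -0.5 60.0

# Static inverse calibration: RMSSD ~ 1/movement; intercept = RMSSD at
# (infinitely) high movement.
movement2 = [1.0, 2.0, 5.0, 10.0]
rmssd_inv = [70.0, 55.0, 46.0, 43.0]         # generated from 40 + 30 / movement
slope_i, asymptote = ols([1 / m for m in movement2], rmssd_inv)
print(round(slope_i, 6), round(asymptote, 6))  # 30.0 40.0
```

This is why, as noted above, the running replacement of the intercept with recently measured resting HRV works naturally for the linear model but does not transfer directly to the inverse one.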
In contrast to previous studies, we showed that dynamic (and inverse) AddHRVr algorithms can predict objective changes in stressfulness in a sample of firefighters quite early (within 20 min), accompanied by increases of stress and negative affect . The dynamic algorithm settings can detect these situations, arguing for the application of interactive psychophysiological ambulatory assessments to unobtrusively detect situations of interest (i.e., stressful moments in an individual's life). At this point, some might object that the odds of detecting increases of objective stressfulness did not reach levels as high as might be achieved by other methods, such as deep learning networks and artificial intelligence approaches , and that increasing the sample size and using data-driven approaches (e.g., machine learning, compressed deep learning) could be an alternative for the present study. It should be kept in mind, however, that, firstly, these machine learning approaches do not work online , although compressed deep learning networks to classify heartbeats and arrhythmia were recently developed . Hence, compressed deep learning networks to detect stress in everyday life could constitute promising tools in ambulatory research in the future. Secondly, although the present study was concerned with the detection of objective transitions of stress for validation purposes, the focus of the AddHRVr algorithm is to trigger psychologically meaningful increases of stress and negative affective states (i.e., reduced resilience) independent of metabolic demands, which might also occur during routine operations (for a similar argumentation, see ). Thirdly, the AddHRVr algorithm is a top-down, theory-driven approach, which contrasts with data-driven, bottom-up machine learning approaches.
Alternative algorithms taking only data from the acceleration sensor into account might provide even better classifications of transitions of objective stress, since leaving the fire station (and showing strong increases in physical activity) is strongly associated with routine and emergency operations and therefore with increases of objective stress (see, e.g., , indicating improved classification of emergency episodes in firefighters by additionally analyzing acceleration sensor data). The main aim of this study, however, was to demonstrate that an AddHRVr algorithm can systematically trigger situations associated with increased stress and negative affect independent of metabolic demands and bodily movements; it did not directly target the issue of (external) validity, where the relatively small sample size of the current study might have been an issue. This is in some contrast to other research in this field, which has to take more than the observed sensitivity into account. In addition to the power, the (direction of the) effect size and the number of delivered triggers are also essential parameters. Researchers should decide about potential online applications of a specific algorithm setting in future field studies after careful consideration of these parameters. Therefore, this simulation study provides additional evidence for theory-driven psychophysiological assessment in daily life. Fourth, in contrast to most bottom-up methods, the applied algorithm did not use all available information at once but works sequentially, which allows online application. Nevertheless, future simulation studies should attempt to further increase the sensitivity of the AddHRVr algorithm to detect situations of increased stress by means of static (linear and inverse) as well as dynamic algorithm approaches.
On a final note, it is important to keep in mind that not only stress is associated with decreased HRV , but also perseverative cognition, worry, and rumination , anxiety , depression , lower quality of interactions , and even activated/arousal-related positive (motivational) states assessed in everyday life . This nicely outlines the potential applications of static and dynamic algorithms in future ambulatory research. When comparing the present findings with the simulation study of Schwerdtfeger and Rominger, it seems likely that different phenomena might be associated with different patterns of (momentary) HRV reductions . This underscores the need for further simulation studies to arrive at algorithm settings for static and dynamic AddHRVr algorithms that allow one to trigger different psychologically meaningful situations in everyday life.
Schwerdtfeger and Rominger concluded that we can probably detect meaningful psychosocial episodes by an online analysis of HRV enabled by ECG devices that have several sensors on board . However, this search for a needle in a haystack requires considerable methodological effort and simulations of various AddHRVr algorithms in different samples assessing various indicators of stress (i.e., subjective and objective), affect, and resilience. This study of firefighters adds evidence to this line of research and suggests that dynamic (and inverse!) AddHRVr algorithms could detect objective transitions of stress that are associated with higher levels of perceived stress and negative affect. Therefore, this study contributes to the development of an interactive psychophysiological ambulatory assessment approach and argues for the assumption that several algorithm adjustments might exist that show similar properties to trigger psychologically meaningful episodes in our daily lives.
Molecular epidemiology and distribution of
Cryptococcus neoformans primarily affects the central nervous system, often leading to meningitis as a typical symptom. The prognosis is poor, and patients have a high mortality rate. Even in developed countries, the 1-year mortality rate for patients infected with Cryptococcus neoformans exceeds 20%. Although Cryptococcus neoformans is commonly found in the natural environment, it was not recognized as a common human pathogen until the late 1960s. Advances in molecular detection techniques have allowed a deeper understanding of Cryptococcus neoformans in clinical settings. This report aims to fill a knowledge gap by providing insights into the molecular epidemiology of Cryptococcus neoformans in human immunodeficiency virus (HIV)-positive patients in China. By doing so, it offers valuable guidance for physicians in clinical practice, particularly for the diagnosis and management of cryptococcal infections in immunocompromised individuals.
Ethics approval

This study adheres to the relevant standards outlined in the Declaration of Helsinki. Written informed consent was obtained from all subjects involved in the study. This study was approved by the Ethics Committee of Tongji University School of Medicine (Approval number: 2022-006-005), and the subjects were informed, according to the Declaration of Helsinki, before study initiation.

Experimental methods

This study employs a cross-sectional design. DNA extraction was performed using the rapid DNA extraction method as described by Guo et al. Polymerase chain reaction (PCR)-specific amplification was conducted using serotype- and mating-type-specific primers (STE20Aa, STE20Aα, STE20Da, and STE20Dα) as outlined by Yan et al. . Following the consensus scheme proposed by the International Society for Human and Animal Mycology and the Enhancing the Reporting of Observational Studies in Epidemiology Statement: Guidelines for Reporting Observational Studies, – PCR amplification was performed for seven housekeeping genes ( CAP59, GDP1, LAC1, PLB1, URA5, IGS1 , and SOD1 ) of Cryptococcus neoformans . The primer sequences and reaction conditions are detailed in . Identifying details of patients have been omitted. Clinical isolates were identified through direct microscopic examination of samples using India ink staining and culture on Sabouraud dextrose agar supplemented with dopamine and urea. Following culture, all isolates were confirmed using a VITEK MS mass spectrometer for strain identification.
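Conceptually, MLST assigns each isolate a sequence type (ST) from the combination of allele numbers observed at the seven housekeeping loci. A toy lookup sketch follows; the allelic profiles in the table are fabricated for illustration, since real assignments come from the Cryptococcus neoformans MLST database.

```python
LOCI = ("CAP59", "GDP1", "IGS1", "LAC1", "PLB1", "SOD1", "URA5")

# Fabricated profiles for illustration only; NOT the real MLST assignments.
PROFILE_TO_ST = {
    (1, 1, 1, 5, 2, 1, 1): "ST5",
    (1, 1, 1, 5, 2, 3, 1): "ST32",
    (1, 4, 1, 5, 2, 1, 1): "ST186",
}

def sequence_type(alleles):
    """Map {locus: allele number} to an ST, or flag a novel profile."""
    profile = tuple(alleles[locus] for locus in LOCI)
    return PROFILE_TO_ST.get(profile, "novel ST")

isolate = dict(zip(LOCI, (1, 1, 1, 5, 2, 1, 1)))
print(sequence_type(isolate))  # ST5
```

In practice, each allele number is itself assigned by matching the sequenced locus against the database's reference alleles; any profile absent from the database would be submitted as a candidate novel ST.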
A total of 64 newly isolated Cryptococcus strains were obtained from HIV patients in multiple hospitals across 19 cities, including the southeastern coastal and southwestern regions of China, from January 2018 to April 2023 . Among the 64 patients, there were more male patients than female patients. The age of the patients ranged from 24 to 59 years, with a median age of 38.5 years . The sources of the isolates were as follows: 51 isolates (79.69%) were obtained from cerebrospinal fluid, 5 isolates (7.81%) were obtained from blood, 4 isolates (6.25%) were obtained from sputum, and 4 isolates (6.25%) were obtained from other sites. All clinical isolates were identified as Cryptococcus neoformans through direct microscopic examination using India ink staining and culture on Sabouraud dextrose agar supplemented with dopamine and urea. The isolates were classified as Aα, VN I type. Multilocus sequence typing analysis revealed three different sequence types (STs): ST5 in 57 cases (89.06%), ST32 in 5 cases (7.81%), and ST186 in 2 cases (3.13%).
The high incidence and mortality rates of cryptococcosis have rendered it a significant global public health concern, particularly in regions with limited healthcare resources, where it imposes a substantial burden on local healthcare systems. The development and widespread use of antiretroviral therapy have contributed to a decline in the incidence of HIV-associated cryptococcal meningitis. Nevertheless, South Africa continues to report over 100,000 new cases annually, accounting for 73% of new cases globally, and cryptococcosis has become the fourth leading cause of infectious disease-related deaths in the region. Cryptococcal disease primarily affects HIV-infected individuals as an opportunistic infection and is a major cause of mortality among late-stage HIV patients. Furthermore, in China, the rising prevalence of malignancies, organ transplantation, and immunomodulatory therapies in recent years has resulted in cryptococcosis emerging among a new population of immunocompromised hosts, leading to an increase in its incidence. Cryptococcus neoformans , the causative agent of cryptococcosis, has numerous hosts, is commonly found in soil, trees, and bird excreta, and has a global distribution. Based on domestic data in China, the detection rate of Cryptococcus neoformans in pigeon feces in the southeastern coastal regions (such as Guangdong and Fujian provinces) is significantly higher than that in other regions. – Therefore, it is speculated that Cryptococcus neoformans primarily spreads among humans and may cause various tissue infections through respiratory transmission or direct tissue inoculation, such as blood transfusion in cryptococcal sepsis patients or solid organ transplantation patients. – Because of advances in detection techniques, Cryptococcus neoformans has been classified into serotypes A, D, and a mixture of A and D, referred to as AD, based on differences in capsule polysaccharide composition.
Various Cryptococcus neoformans serotypes exhibit differences in virulence, environmental distribution, and susceptibility to antifungal drugs. Among these, serotype A strains are globally predominant and represent the major clinical isolates responsible for infections. Moreover, the advent of molecular detection techniques has ushered in an era of molecular typing for Cryptococcus neoformans. Through PCR fingerprinting, Cryptococcus neoformans is primarily categorized into four genotypes: VN I, VN B/VN II, VN III, and VN IV. Additionally, molecular detection of Cryptococcus neoformans and Cryptococcus gattii has uncovered significant genetic heterogeneity between the two species, thereby enhancing our understanding of the characteristics of specific strains. In this study, all isolated Cryptococcus neoformans strains exhibited serotype A, mating type α, and genotype VN I, consistent with previous reports indicating that Aα/VN I is the globally dominant type of Cryptococcus neoformans. Three STs were identified, with ST5 being the predominant type. Several studies have shown that ST5 occupies an exceptionally dominant position in China and East Asia, consistent with its proposed East Asian origin and good adaptability to the regional environment. This study indicates that infections caused by Cryptococcus neoformans are predominantly observed in male HIV patients, which can be attributed to the compromised immune function of these individuals, rendering them more susceptible to the pathogen. Furthermore, the majority of patients originated from the southeastern coastal and southwestern regions of China, aligning with the epidemiology of AIDS in the country. These regions are characterized by high population mobility, and their dense populations, along with favorable geographical and climatic conditions, facilitate the spread of cryptococcal infections.
The findings reveal that most cases occur in middle-aged and young adults, primarily between the ages of 24 and 59 years, with an average age of 36.31 ± 9.13 years. Most patients presented with central nervous system-related clinical symptoms, such as headache, vomiting, and signs of meningeal irritation, while a smaller number occasionally reported blurred vision or disturbances of consciousness. The primary route of transmission for Cryptococcus neoformans infection is inhalation of aerosolized spores, and the nonspecific nature of the clinical symptoms often leads to incidental discovery of the infection during tests for other diseases. This study also revealed that genotype VN I is currently the dominant strain in China, accounting for 63.33% of cases, which is comparable to the value of 76% reported by Firacative et al. However, notable differences exist in the geographical distribution of the various genotypes observed globally. For instance, in Slovenia, VN V is the predominant genotype, while studies in Australia indicate that VN I and VN II are present in roughly equal proportions. Compared with other regions worldwide, Cryptococcus neoformans demonstrates lower genetic diversity in China, with ST5 constituting an overwhelming majority at 90% of cases, highlighting the need for increased attention to this infection. Although the overall incidence of cryptococcosis is low, the absence of specific clinical symptoms, the limited sample sizes of existing studies, and the lack of in vitro drug sensitivity testing for Cryptococcus neoformans underscore the pressing need for further research, including antifungal drug sensitivity studies, to provide valuable references for clinical diagnosis and treatment. In summary, this study elucidated the epidemiological patterns and clinical characteristics of HIV-associated cryptococcal infection.
The findings indicated that middle-aged male individuals constituted the primary affected population in the coastal southeast and southwest regions of China over the past 5 years. The VN I genotype emerged as the predominant genotype, with ST5 identified as the main ST associated with cryptococcal infection. However, the small sample size in this study may have impacted the results, potentially leading to discrepancies when compared with the multiple genotypes reported in international studies.
Validation of the VisionArray® Chip Assay for HPV DNA Testing in Histology Specimens of Oropharyngeal Squamous Cell Carcinoma | 5e683a4c-fcb3-4676-9b1a-f06cf8fa4544 | 10973319 | Anatomy[mh] | The detection of human papillomavirus (HPV)-related oropharyngeal squamous cell carcinoma (OPSCC) is increasingly important in the routine clinical setting. Establishing the HPV status adds valuable information for staging and prognostication of patients with OPSCC. Additionally, in cytology and histology specimens from patients with cervical squamous cell carcinoma metastasis and unknown primary tumor, detection of oncogenic (or high-risk) HPV can further guide clinicians to the primary tumor origin (i.e., palatine and lingual tonsils) . Many clinical trials are currently investigating new treatment modalities and de-escalation schemes for HPV-related OPSCC and may demand an upfront HPV status prior to inclusion . Recent data from the multinational and multicenter EPIC study have shown that discordant HPV status in OPSCC (p16−/HPV + or p16 + /HPV−) have a significantly worse prognosis than patients with p16 + /HPV + OPSCC and therefore recommend specific HPV testing being performed in clinical trials along with p16 immunohistochemistry (IHC) . HPV testing is most often performed by surrogate marker p16 IHC, as it is recommended by the College of American Pathologist (CAP) , and rarely in combination with high-risk (HR)-HPV testing using DNA PCR or E6/E7 mRNA in situ hybridization (ISH). mRNA ISH is considered the gold standard as it determines transcriptionally active HPV. Current p16 and HPV testing guidelines in head and neck cancer are mainly based on the CAP evidence-based guidelines from 2018, which states that all new OPSCC patients should be tested with p16 IHC with a 70% nuclear and cytoplasmic positivity as cutoff . 
During the past decade, our institution has continuously implemented and validated assays for PCR-based HPV testing that can be applied to both histology and cytology specimens. Currently, we have a database of nearly 3000 OPSCC patients who have previously been tested for p16 and HPV DNA, including data on HPV genotyping. In pursuit of an in-house HPV-specific assay for routine clinical use in combination with p16 immunohistochemistry, we implemented a PCR-based DNA assay, the VisionArray® HPV Chip, in 2017, which allows for simultaneous genotyping of 41 clinically relevant HPV types. However, it has not been validated for head and neck squamous cell carcinomas, and we therefore aimed to validate the assay for formalin-fixed and paraffin-embedded (FFPE) samples of OPSCC using the previously applied standard pan-HPV DNA PCR as a reference.
Patients and Samples
This retrospective study retrieved archived FFPE samples from patients diagnosed with OPSCC (n = 101) between 2018 and 2019 (either HPV DNA-positive or negative) and a benign group of tumor samples consisting of Warthin’s tumors (adenolymphoma, n = 20) diagnosed between 2013 and 2014 and branchial cleft cysts of the lateral neck (n = 14) diagnosed between 2013 and 2015. Samples consisted of biopsy or resection specimens and were reviewed by expert head and neck pathologists at the Department of Pathology, Copenhagen University Hospital—Rigshospitalet, Denmark. All patients with OPSCC were tested with p16 IHC, in which strong and uniform p16 staining (both cytoplasmic and nuclear) in > 70% of tumor cells was considered positive.

Extraction of DNA
DNA was extracted and purified from one to four 10-μm slices of FFPE samples using either an automated QIAcube and Qiagen’s GeneRead DNA FFPE Kit (#180134, Qiagen, Hilden, Germany) according to the manufacturer’s recommendations (OPSCC samples) or by manual extraction (all other samples and any secondary DNA extractions). For the manual extraction, 200 µL Tris–EDTA buffer solution was added to the FFPE slices before melting the paraffin at 95 °C for 10 min while gently vortexing. The samples were cooled to 56 °C, making the paraffin gather as a solid layer on top of the sample, and 20 µL Proteinase K (Qiagen) was added through a small hole in this layer. Finally, the samples were incubated overnight at 56 °C while gently vortexing. The DNA concentration was measured using either a DeNovix DS-11 (DeNovix, Wilmington, DE, USA) or a NanoDrop (Thermo Fisher Scientific, Waltham, MA, USA).

Pan-HPV DNA PCR of OPSCC Samples
HPV status of OPSCC samples was assessed by initial pan-HPV testing by PCR using the general primers GP5+/6+ as described earlier. DNA quality was confirmed with a GAPDH control. PCR products were visualized on a pre-cast 2% agarose E-gel (Invitrogen, Waltham, MA, USA) using the Gel Doc EZ system (Bio-Rad, Hercules, CA, USA). The image was assessed with the Molecular Imager and Image Lab software (both from Bio-Rad) according to the manufacturer’s instructions. The expected amplicon sizes were approximately 150 base pairs for GP5+/6+ and 200 base pairs for GAPDH. Samples positive for both GP5+/6+ and GAPDH were deemed HPV-positive, and samples negative for GP5+/6+ but positive for GAPDH were deemed HPV-negative. The turnaround time for this assay was a mean of four calendar days.

VisionArray® HPV Chip Assay
All samples were analyzed for HPV status and genotype using the VisionArray® HPV Chip 1.0 system (#VA-0001, ZytoVision, Bremerhaven, Germany). For the OPSCC specimens, material from the same DNA extraction as for the initial pan-HPV DNA PCR assay was used. The VisionArray® HPV Chip detects DNA from 41 clinically relevant HPV genotypes classified by the manufacturer as 12 high risk (HPV16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, and 59), 12 probably high risk (HPV26, 34, 53, 66, 67, 68a, 68b, 69, 70, 73, 82IS39, and 82MM4), and 17 low risk (HPV6, 11, 40, 42, 43, 44, 54, 55, 57, 61, 62, 72, 81CP8304, 83MM7, 84MM8, 90, and 91). The assay was performed according to the manufacturer’s recommendations using the VisionArray HPV PreCise Master Mix (ZytoVision, #ES-0007) followed by the detection kit (ZytoVision, #VK-0003). Chip scans were analyzed using the VisionArray MultiScan E4302 software with a threshold of 25. The turnaround time for this assay was up to 24 h.

Statistics
Statistical analyses calculating the sensitivity, specificity, and positive and negative predictive values were performed using IBM SPSS Statistics for Windows, Version 28.0 (IBM Corp., Armonk, NY).
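The pan-HPV positivity rule stated above can be summarized as a small decision function. This is only an illustration of the described logic (GP5+/6+ band plus GAPDH band → HPV-positive; GAPDH band only → HPV-negative); treating a failed GAPDH control as "invalid" is an assumption, since the text defines calls only for GAPDH-positive samples.

```python
# Pan-HPV call from the gel readout, as described in the text.
# GAPDH serves as the DNA-quality control; returning "invalid" when the
# GAPDH band is absent is an assumed handling, not stated in the source.
def call_hpv(gp5_6_band: bool, gapdh_band: bool) -> str:
    if not gapdh_band:
        return "invalid"  # DNA quality control failed (assumed handling)
    return "HPV-positive" if gp5_6_band else "HPV-negative"

print(call_hpv(True, True))   # HPV-positive
```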
We included samples from 101 patients with OPSCC; the average age was 61 years (range: 40–84 years), and the majority were male (72%). Eighty samples were HPV-positive using the pan-HPV PCR, of which 74 were also p16-positive, five were p16-negative, and one had unknown p16 status due to lack of remaining tumor material. All 80 samples were correspondingly HPV-positive using the VisionArray® HPV Chip, as illustrated in the flowchart in Fig. . The predominant genotype detected was HPV-type 16 (n = 71, 89%), followed by HPV33 (n = 5), HPV18 (n = 1), HPV35 (n = 1), HPV59 (n = 1), and HPV67 (n = 1). Examples of an HPV-type 16-positive and an HPV-negative chip scan using the VisionArray® HPV Chip are illustrated in Figs. and . Twenty-one patients were HPV DNA-negative using the pan-HPV DNA PCR, of which corresponding p16 was negative in 19 and positive in two samples. With the VisionArray® HPV Chip, 18 samples tested HPV-negative (including the mentioned p16-negative/HPV DNA-positive samples), four of these after repeated analysis since the first analysis did not meet the threshold requirements. Three samples were excluded due to insufficient DNA quality or lack of FFPE material for DNA extraction. A total of 20 Warthin’s tumors of the salivary glands and 14 branchial cleft cysts of the neck were included, all of which tested HPV DNA-negative with both methods and were p16-negative.

Sensitivity and Specificity
The overall sensitivity and specificity of the VisionArray® HPV Chip assay were 100% [95% CI 95.5%; 100.0%] and 96.3% [95% CI 87.3%; 99.6%], and the positive predictive value and negative predictive value were 97.6% [95% CI 91.5%; 99.7%] and 100% [95% CI 93.2%; 100%], respectively.
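For transparency, the reported metrics can be re-derived from a 2×2 table. The counts below (TP = 80, FP = 2, TN = 52, FN = 0) are back-calculated from the reported percentages and are therefore an assumption rather than figures taken from the study’s raw data; the sketch also shows the closed-form exact (Clopper–Pearson) lower confidence bound for a proportion of n successes in n trials, which reproduces the reported 95.5% lower limit for sensitivity.

```python
# Illustrative re-computation of the reported diagnostic metrics.
# The 2x2 counts (TP=80, FP=2, TN=52, FN=0) are back-calculated from the
# reported percentages and are an assumption, not taken from the paper.
tp, fp, tn, fn = 80, 2, 52, 0

sensitivity = tp / (tp + fn)   # 80/80 = 1.000 -> 100%
specificity = tn / (tn + fp)   # 52/54 ≈ 0.963 -> 96.3%
ppv = tp / (tp + fp)           # 80/82 ≈ 0.976 -> 97.6%
npv = tn / (tn + fn)           # 52/52 = 1.000 -> 100%

# When all n reference-positives test positive (k = n), the exact
# (Clopper-Pearson) 95% CI has the closed form [(alpha/2)**(1/n), 1].
alpha = 0.05
sens_ci_lower = (alpha / 2) ** (1 / (tp + fn))   # ≈ 0.955, matching the
                                                 # reported [95.5%; 100.0%]
```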
The VisionArray® HPV Chip was successfully implemented in our laboratory with good analytical capabilities, illustrated by the high sensitivity and specificity in comparison to our previously used standard reference assay, the pan-HPV DNA PCR, which had a sensitivity of 86.7% and a specificity of 92% as shown previously. Another advantage of the VisionArray® HPV Chip assay is that it provides simultaneous HPV genotyping in a single-step procedure, detecting 41 clinically relevant genotypes classified as low risk, probably high risk, and high risk. In the current validation cohort, the most frequently detected HPV genotype was HPV16, which reflects the demographic patient group from Eastern Denmark. Prior to this study, we had to acquire the genotype in a secondary analysis using next-generation sequencing, which was more time consuming and expensive. To our knowledge, HPV genotyping does not have clinical implications in terms of treatment or prognostication, but it may provide important information for research purposes in epidemiologic studies and in the etiopathogenesis of OPSCC. In the future, it may add value in relation to liquid biopsies with cell-free HPV DNA in follow-up surveillance after treatment. A recent study investigated the role of high-risk HPV genotypes on survival in patients with oropharyngeal cancer and found no differences when comparing HPV16 with non-HPV16 genotypes. The subgroup analysis indicated that patients with genotypes HPV33 and HPV35 had a significantly better 5-year overall survival than those with other non-HPV16 genotypes. Advanced HPV testing including genotyping can be adopted by the pathologist in other head and neck entities. It has an important role in the diagnostics of sinonasal malignancies, in particular the HPV-related Multiphenotypic Sinonasal Carcinoma, which requires the presence of certain HPV genotypes.
The 5th edition of the World Health Organization classification of head and neck tumors recommends that high-risk HPV must be demonstrated by in situ hybridization or PCR-based techniques, specifically to include type 33; p16 alone is not sufficiently specific to make the diagnosis. HPV genotyping is often requested by otolaryngologists after excision of respiratory papillomas, as approximately 30% of head and neck papillomas are related to HPV. Establishing low-risk HPV genotypes such as HPV6 and HPV11 plays an important role during the patient consultation. The VisionArray® HPV Chip assay has proven to be highly suitable in clinical practice, as the HPV testing can be performed within 24 h compared with the more time-consuming pan-HPV DNA PCR, where the turnaround time is up to four days. This is especially useful if applied to cytology specimens: if HPV-positive, it facilitates the diagnostic work-up and treatment for patients who present with a metastatic squamous cell carcinoma of the neck and an unknown primary. HPV DNA testing is feasible on previously stained cytology smears, as previously shown. The use of p16 staining on cytology specimens is not recommended, as there is no validated cutoff value and it requires the preparation of a cell block, with a risk of not having viable tumor cells. This study was strengthened by the large validation cohort including both HPV-positive and HPV-negative OPSCC, as well as benign tumors previously tested for HPV DNA. Among the HPV-negative patient samples, we experienced a few samples that had either poor DNA quality or insufficient DNA concentration for HPV testing, which could be explained by several factors. Firstly, we utilized 4- to 5-year-old archived FFPE material where the DNA quality can vary, and secondly, some samples consisted of very small biopsies with little or no tumor tissue left after being used for p16 staining and the reference pan-HPV DNA PCR assay.
As part of every implementation process, we must highlight the importance of the pre-analytical steps in the laboratory to avoid contamination and cross-contamination between patient samples, which could lead to inaccurate results. HPV testing algorithms and the choice of assay vary among laboratories across the world and depend on the resources, capacity, and staff at the different institutions. A full review of all HPV testing assays is beyond the scope of this paper; nonetheless, the most commonly used targets are HPV DNA, HPV RNA, viral oncoproteins, cellular proteins, and HPV-specific serum antibodies. In the CAP guidelines, p16 IHC is recommended for biopsy/resection specimens and is the most widely used surrogate marker for HPV. The guidelines further state that additional HPV-specific testing may be done based on the decision of the pathologist and/or treating clinician, or in the context of a clinical trial. p16 IHC is suitable as a stand-alone assay in most clinical settings for histologic specimens deriving from oropharyngeal tumors, as it provides acceptable sensitivity and specificity, is much more cost-effective than molecular tests, has a short turnaround time, and is easy to analyze. However, the additional use of specific HPV DNA testing should be strongly considered for patients who are p16-positive and potential candidates for clinical trials that offer either de-escalation or intensification of treatment. This is based on a recently published multinational study which investigated discordant p16/HPV oropharyngeal cancers and their prognostic implications. The authors argue that an exception could be made in some geographical regions that are associated with a high p16/HPV concordance (i.e., North America).
Interestingly, the study showed that if p16 IHC alone is used to determine HPV status, 8.1% of p16-positive patients worldwide, and up to almost 26% in regions of low HPV-attributable fractions such as southern Europe, would be incorrectly classified as having HPV-related tumors. Therefore, at our institution, HR-HPV testing is currently applied to either a resection specimen of a metastasis or the primary tumor (biopsy/resection) along with p16 IHC, as the majority of our patients participate in clinical trials. In the few discordant p16-positive and HPV DNA-negative cases, we request HPV genotyping at a different laboratory as a quality assessment, to ensure that we do not fail to detect a genotype that is not covered by the VisionArray® HPV Chip assay; this has not been an issue to date. The recommended gold standard method for HR-HPV testing is mRNA ISH, which detects transcriptionally active HPV E6/E7 oncogenes. This assay has previously been validated against p16 and HPV DNA by several institutions, which agree on its excellent analytical capabilities, technical feasibility, and fast turnaround. Furthermore, it provides direct visualization of the HPV-positive staining, which further strengthens the sensitivity and specificity. However, the assay is limited by its use on research platforms only; it is less cost-effective than p16 IHC and PCR-based assays and currently only allows for the detection of mRNA of up to 18 HR-HPV genotypes in a single cocktail probe. Based on our validation cohort, it would have failed to detect HPV-type 67 in a single patient, a type classified as “probably high risk” by the VisionArray® HPV Chip. Another limitation is that the ISH method requires a morphologically well-preserved specimen where the cell nuclei are well visualized. In contrast, the PCR-based analysis works well on crushed tumor cells, such as defrosted frozen section tissue or laser-coagulated tissue specimens.
In the future, it is necessary to clarify when to perform HPV mRNA ISH in routine clinical use in relation to the existing assays. In conclusion, we found that the VisionArray® HPV Chip assay can be recommended for HR-HPV testing in FFPE tissue samples from OPSCC providing both a fast and simultaneous genotyping for 41 clinically relevant HPV types.
Synthesis of recovery patterns in microbial communities across environments | 724507b0-6b0a-4184-a6e2-2d9ca1db2961 | 11071242 | Microbiology[mh] | Bacterial communities are ubiquitous , dynamic , and sensitive to environmental change . A wide range of literature explores microbiome responses to rapid environmental change in different environments , consistently revealing that microbial communities are affected by disturbance, and generally do not recover their pre-disturbance composition . Historically, experimental procedures, designs, and hypotheses regarding the recovery of microbiomes following disturbance have developed in a largely field-specific manner (e.g., medical microbiology, soil microbiology, aquatic microbiology). Consequently, a comparison of community disturbance responses across microbial environments is lacking. Whether microbiomes from different environments exhibit responses to disturbance, and whether these responses are consistent with extant conceptual frameworks is a major gap in knowledge, especially considering growing anthropogenic pressures on microbial systems (e.g., pollutants, antibiotics, and climate extremes). Properties of the microbial environment likely affect the dominant responses of microbiomes to disturbance, but empirical comparisons of recovery across environments are scarce . Different microbial habitats have varying degrees of spatial and temporal heterogeneity, microbial species pool sizes, connectivity, and resource availability, all of which may affect community assembly processes , and likely result in different disturbance responses among environments. For example, animal gut microbiomes have relatively low diversity and are dispersal-limited due to selective pressures associated with host physiology that likely influence the recovery of the resident microbial diversity. In contrast, soil microbiomes are extremely diverse, but poorly connected , likely affecting recolonization following disturbance. 
The lack of host-driven selection in these systems, combined with high diversity, may result in communities composed of different taxa than in their pre-disturbance state. Assessments of microbiome recovery often rely on indicator measurements that are environment-specific (e.g., host health in host-associated microbiomes or plant productivity in soil microbiomes), hindering the comparison of microbial disturbance responses across environments. By considering changes in diversity at multiple spatial scales (i.e., within and among samples) and the role of spatial connectivity in these responses, the metacommunity framework can help to synthesize and explicitly compare microbial community responses to disturbance across environments, and in turn provide new insights into the role of the environment in shaping these responses. To this end, publicly available 16S rRNA gene amplicon sequences can be leveraged to assess bacterial community responses as changes in bacterial richness (the number of taxa present in a sample) and composition (variation in taxon relative abundance between samples). Generally, we expect that across environments, community richness will decrease (Fig. a), as has been found across both aquatic and terrestrial ecosystems. We also expect that community composition will change immediately after the disturbance, due, for example, to differential mortality and an altered competitive landscape. However, environmental change does not consistently result in decreased richness. Additionally, in microbes, disturbances may involve the addition of novel taxa (e.g., with sewage sludge amendments to soil), which may result in richness increases. Over longer time scales following disturbance, richness may either fail to fully recover (at least within the period observed), recover fully, or even be higher following disturbance. Community composition is often a more robust indicator of biodiversity change than richness.
Compositional changes can be assessed in terms of compositional variation among local communities, or dispersion, and the extent to which the community recovers to its pre-disturbance composition, or turnover (Fig. b). Following disturbance, dispersion can decrease, for example, if a stressor is selective and leaves only tolerant taxa to persist. Alternatively, dispersion can increase, for example, if the stressor is non-selective, or more generally if the taxa that persist following disturbance differ. In microbiomes, the Anna Karenina Principle (AKP), derived primarily from the observation of host-associated communities, posits that healthy microbiomes are more stable, and thus less variable, than disturbed ones. Given enough time, we expect the same taxa that dominated prior to a disturbance to recover their original abundances, especially in host-associated microbiomes, which can be modulated by the host. However, under some circumstances (e.g., strong or long disturbances, or invasion by novel taxa), it is also possible that the disturbance could permanently alter relative abundance patterns in the community, resulting in communities that tend away from their pre-disturbance composition over time. Across environments, microbiomes have been shown to recover towards (negative turnover), or to drift away from (positive turnover), their pre-disturbance compositions. Importantly, both changes in dispersion and turnover can arise from changes in richness alone, and null models have been developed that allow for the measurement of compositional change independent of changes in community richness. Meta-analyses focusing on the undisturbed temporal dynamics of microbial communities have shown consistent patterns across systems, but temporal disturbance responses have received less attention. To this end, we performed a synthetic analysis of the time series of disturbed aquatic, mammal-associated, and soil microbiomes.
Across environments, we compared the initial response and subsequent recovery from disturbance in terms of community richness, dispersion, and turnover, and used null models to disentangle whether the observed changes in dispersion and turnover were due to changes in richness. Given the rapid rates of compositional turnover in microbiomes , we focused on 29 studies that repeatedly sampled the microbiomes within 50 days post-disturbance.
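The two compositional measures can be made concrete with a minimal sketch on toy count tables. This is not the study’s actual pipeline (which additionally uses null models to control for richness); it only illustrates the definitions: dispersion as the mean pairwise dissimilarity among replicate communities at a time point, and turnover as the mean dissimilarity between disturbed replicates and pre-disturbance samples, here using Bray–Curtis dissimilarity on hypothetical data.

```python
# Minimal sketch of dispersion and turnover on toy communities
# (taxon -> read count); richness-controlled null models are omitted.
from itertools import combinations

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity: 1 - 2*shared / (total_a + total_b)."""
    taxa = set(a) | set(b)
    shared = sum(min(a.get(t, 0), b.get(t, 0)) for t in taxa)
    total = sum(a.values()) + sum(b.values())
    return 1 - 2 * shared / total

def dispersion(samples):
    """Mean pairwise dissimilarity among replicates at one time point."""
    pairs = list(combinations(samples, 2))
    return sum(bray_curtis(x, y) for x, y in pairs) / len(pairs)

def turnover(disturbed, controls):
    """Mean dissimilarity of disturbed replicates to pre-disturbance ones."""
    dists = [bray_curtis(d, c) for d in disturbed for c in controls]
    return sum(dists) / len(dists)

# Hypothetical pre- and post-disturbance replicate communities.
pre = [{"t1": 50, "t2": 30, "t3": 20}, {"t1": 55, "t2": 25, "t3": 20}]
post = [{"t1": 10, "t2": 5, "t4": 85}, {"t1": 70, "t2": 20, "t3": 10}]

# Disturbed replicates vary more among themselves (higher dispersion)
# and sit further from the pre-disturbance composition (positive turnover).
assert dispersion(post) > dispersion(pre)
```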
Dataset selection
Using Google Scholar and Web of Science search engines (a list of keywords is available as ), we collated bacterial studies from systems where an experimental disturbance was imposed and 16S rRNA gene amplicon sequencing datasets were available. Specifically, we chose studies that (1) were sequenced on Illumina or IonTorrent platforms; (2) sequenced the V3–V4 regions of the 16S rRNA gene; (3) were published after 2014; (4) repeatedly sampled microbial communities following a discrete disturbance or environmental change; (5) included samples from before the disturbance (i.e., controls), at least one (replicated) sample within a week after disturbance, and at least one (replicated) sample within a month after disturbance; and (6) included experimental triplicates (i.e., three samples per time point). Criteria 1–3 ensured that the sequencing techniques were comparable between studies and reduced the biases associated with sampling different regions of the 16S rRNA gene. Importantly, downstream analyses adopted a synthetic framework (i.e., we reprocessed sequences using a single approach, described below), and samples from different studies were not combined. We applied criteria 4–6 to examine variation in rates of compositional change across environments. Criterion 6 ensured that the variability of the microbiomes at each time point could be measured. We defined a disturbance causally, as a “discrete, rapid environmental change”. We excluded datasets for which raw sequencing data were not publicly available and stopped data collection in October 2020. In all, datasets from 29 studies matched our criteria (see Table S for all datasets). We grouped these time series into three environmental categories: aquatic, mammal-associated, and soil microbiomes (including rhizosphere microbiomes).
To further explore the role of disturbance type on the observed phenomena, we categorized disturbances according to their effect on the community, as previously done in macroecology. Categories included mortality-inducing treatments (e.g., heat, azoxystrobin, ciprofloxacin, mechanical removal), mortality-inducing treatments combined with a microbial invasion (e.g., cefuroxime and Clostridium difficile), mortality-inducing treatments combined with nutrient additions (e.g., heat and fertilizer additions), drought, invasions (e.g., the addition of Pseudomonas or C. difficile), metal pollution (e.g., cadmium additions), nutrient additions (nitrate, chitin, diesel), nutrient additions including potential invasions (e.g., the addition of wastewater, the addition of diesel and a bacterial consortium), and PAH contamination.

Sequence reprocessing and functional inference

Raw 16S rRNA gene amplicon data and metadata were obtained from the NCBI Sequence Read Archives, with the exception of two datasets, one of which came from another database and the other of which was obtained directly from the authors (see Table S for accession numbers). We reprocessed sequences in R 3.4.3 using the dada2 package and a conservative approach. To account for the different sequence qualities across datasets and to improve comparability in the reprocessed data, each dataset was inspected and reprocessed separately, and downstream statistical analyses accounted for between-study differences. Prior to processing, we visually inspected two samples per study with the plotQualityProfile function to determine whether the reads had been merged prior to archiving and to confirm that primers were not present. We only used forward reads because reverse reads were not available for all studies.
Following inspection, we trimmed and truncated sequences on a study-by-study basis (see Table S for trimming and truncation lengths) to preserve a 90-bp segment, the minimum recommended in the Earth Microbiome Project protocols (and the maximum possible for studies that used Illumina HiSeq machines). We acknowledge that 90 bp is shorter than the length that is often used in amplicon sequencing studies and that longer segments would have detected higher microbial diversity; however, our aim was to compare diversity patterns across studies, for which short read lengths are suitable. Similar to downstream rarefaction, trimming all segments to the same length ensured a comparable degree of biodiversity detection across studies. We filtered, dereplicated, and chimera-checked each read using standard workflow parameters. While we did not use taxonomic assignments in our analyses or compare amplicon sequence variants (ASVs, 100% sequence identity) across datasets, we assigned reads to ASVs with the SILVA v.132 training set to remove non-bacterial ASVs. Unassigned, bacterial ASVs (i.e., those classified as Bacteria) were preserved. Details about the percentage of reads lost at each step of sequence processing, per study, are included in Fig. S. As these studies had a wide range of sequencing depths across samples (independent of the study environment), we randomly subsampled each sample to 1500 reads to obtain a similar degree of biodiversity detection across studies. To ensure that our findings were not affected by observation depth, we additionally ran all analyses in parallel using the deepest possible observation depth (with a lower bound of 1500 reads per sample) for each study (Table S). As our findings were consistent regardless of standardization (Fig. S), we present only the results from the global rarefaction (i.e., 1500 reads per sample for all samples).
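The rarefaction step above was performed in R as part of the authors' pipeline; as a minimal illustrative sketch only (the function name `rarefy` is ours, not the authors'), per-sample subsampling to a fixed depth without replacement can be written as:

```python
import random
from collections import Counter

def rarefy(counts, depth, seed=None):
    """Randomly subsample a list of per-ASV read counts to a fixed depth,
    drawing individual reads without replacement (rarefaction)."""
    if sum(counts) < depth:
        raise ValueError("sample has fewer reads than the target depth")
    # One entry per read, labelled by the index of the ASV it belongs to
    reads = [asv for asv, n in enumerate(counts) for _ in range(n)]
    rng = random.Random(seed)
    kept = Counter(rng.sample(reads, depth))
    return [kept.get(asv, 0) for asv in range(len(counts))]

sample = [900, 400, 300, 150, 50, 0]   # reads per ASV (1800 reads total)
rarefied = rarefy(sample, depth=1500, seed=42)
assert sum(rarefied) == 1500 and rarefied[5] == 0
```

Samples below the target depth cannot be rarefied and must be handled separately (here, by raising an error).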
To examine the completeness of each sample relative to the total richness in a community, we calculated sample completeness using the BetaC package. On average, our samples represented 0.96 ± 0.05 (mean ± sd) of the community. We removed any time points that had fewer than three experimental replicates for each time series. We coded time series so that time (days) ≥ 0 occurred after disturbance, and time < 0 denoted the pre-disturbance community.

Calculation of richness and turnover metrics

To examine variation in diversity across environments, we calculated metrics that quantify diversity within samples (richness) and variation in taxon composition between samples (turnover). We calculated richness and turnover metrics using the phyloseq package's data structure. We calculated species richness as the number of unique ASVs per sample (Hill q = 0) and the Inverse Simpson's index (Hill q = 2). We used Bray–Curtis dissimilarity to quantify two aspects of compositional variation. First, to describe the compositional variation between samples collected at the same time point, we calculated dispersion as the pairwise Bray–Curtis dissimilarity between all combinations of experimental replicates for each time point within each time series. For studies that resampled the same experimental unit (e.g., host organism or microcosm) over time, we excluded pairwise comparisons between samples from the same experimental units. Second, to quantify how composition changed following disturbance, we calculated turnover using pairwise dissimilarities between all control samples (i.e., pre-disturbance) and all subsequent replicate samples at each time point following disturbance. Using this approach, communities that recover their pre-disturbance state will have a negative slope estimate through time, while communities that become increasingly different from the pre-disturbance community over time will have a positive slope estimate (Fig. ).
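The two Bray–Curtis-based quantities described above (the authors computed them in R with the phyloseq data structure) can be sketched as follows; this is an illustrative Python sketch with our own function names, and it omits the exclusion of pairs from the same experimental unit:

```python
from itertools import combinations, product

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two equal-length abundance vectors."""
    shared = sum(min(a, b) for a, b in zip(x, y))
    return 1.0 - 2.0 * shared / (sum(x) + sum(y))

def mean(values):
    values = list(values)
    return sum(values) / len(values)

def dispersion(replicates):
    """Mean pairwise dissimilarity among replicates from one time point."""
    return mean(bray_curtis(a, b) for a, b in combinations(replicates, 2))

def turnover(controls, post):
    """Mean dissimilarity between pre-disturbance controls and the
    replicates at one post-disturbance time point."""
    return mean(bray_curtis(c, p) for c, p in product(controls, post))

controls = [[10, 5, 5, 0], [9, 6, 4, 1]]   # pre-disturbance replicates
post     = [[2, 1, 10, 7], [3, 0, 9, 8]]   # replicates at a later time point
within  = dispersion(controls)      # ~0.1: replicates are compositionally similar
between = turnover(controls, post)  # ~0.6: composition shifted after disturbance
```

Fitting a slope to `turnover` values across successive post-disturbance time points then gives the recovery (negative slope) versus drift (positive slope) signal used in the analyses.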
Because compositional changes can be due to changes in richness alone, we used a null model to disentangle compositional changes from changes in richness. We randomly permuted abundance values within each sample 1000 times, preserving the number of taxa (i.e., richness) for each sample, and recalculated turnover and dispersion metrics for each permuted matrix to derive a null expectation for each. For both metrics, Z-scores were calculated as

$$Z = \frac{u^{\mathrm{observed}} - \mu^{\mathrm{expected}}}{\sigma^{\mathrm{expected}}},$$

where $u^{\mathrm{observed}}$ is the observed dissimilarity, $\mu^{\mathrm{expected}}$ is the mean of the resamples, and $\sigma^{\mathrm{expected}}$ is their standard deviation. Z-scores are a powerful method to explore dissimilarities as deviations from a null expectation, perform particularly well for long-tailed microbiome data, and are recommended over subtraction-based dissimilarity partitioning methods. Statistical analyses evaluated dissimilarity and Z-score values in parallel. Significant (95% credible interval) patterns observed in both dissimilarity and Z-score data were attributed to changes in community richness, while significant patterns observed only in the Z-score data were attributed to changes in the relative abundance of taxa within the community. We present models fit to the raw dissimilarity metrics (i.e., Bray–Curtis) in the main text, and report where they differed from analyses of the Z-scores, which are presented in full in Figs. S and S. All code for bioinformatics processing and null models is available at https://github.com/drcarrot/DisturbanceSynthesis .

Statistical analyses

We fit generalized linear models to assess how richness, dispersion, and turnover change in response to disturbances using Bayesian methods and the brms package; detailed information about each model is provided in the "" section. We performed all analyses at the ASV level. To quantify the immediate response of richness and dispersion to disturbance, we used before-after analyses that compared data from prior to the disturbance with samples taken < 4 days post-disturbance; to determine whether responses differed between environments (i.e., aquatic, mammal, soil), we included an interaction between the before-after and environment categorical covariates. Five studies were excluded from the before-after analyses due to a lack of samples (Table S). To quantify how richness and dispersion changed through time following disturbance, we fit models to data from the first 50 days post-disturbance only (i.e., pre-disturbance samples were not included).
Finally, to examine how composition changed from pre- to post-disturbance, we fit models to turnover that quantified compositional changes between the pre-disturbance controls and samples taken in the first 50 days post-disturbance. To determine whether changes following disturbance differed between environments, all time-series models included an interaction between time and environment. Time (in days) was fit as a continuous covariate and was centered by subtracting the mean duration from all observations prior to modeling. We fit all models with the same hierarchical grouping (or random-effects) structure: to account for methodological variation between studies, we included varying intercepts for each study in all models; and, because many studies included more than one disturbance type, we included varying slopes and intercepts for time series within studies (i.e., one time series per disturbance type). Models fit to species richness (i.e., the before-after and time-series models) assumed a negative-binomial error distribution and a log-link function. In addition to the parameters and the grouping structure described above, the shape parameter of the negative-binomial distribution (which estimates aggregation) was also allowed to vary among studies. Models fit to raw values of dispersion and turnover assumed a Beta error distribution and a logit-link function, and the precision parameter was allowed to vary among studies. Models fit to Z-transformed dispersion and turnover assumed Gaussian error and an identity link, and, to account for heteroskedasticity, residual variation (i.e., the sigma parameter) was modeled as a function of the environment and allowed to vary among studies. The modeled responses and means per group, as well as the 95% CIs, are depicted together with the data where applicable. For each comparison and each environment, we identified a time series as exhibiting an upward or downward trend if the 97.5% CI did not overlap with zero, and as neutral otherwise.
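The trend-labeling rule above can be reproduced schematically: given posterior draws of a time-series slope, label the series by whether the central credible interval excludes zero. This is an illustrative Python sketch (the name `classify_trend` is ours; the authors worked from brms posteriors in R):

```python
import random

def classify_trend(slope_draws, level=0.95):
    """Label a posterior sample of slopes 'upward', 'downward', or 'neutral'
    depending on whether the central credible interval excludes zero."""
    draws = sorted(slope_draws)
    tail = (1 - level) / 2
    lo = draws[int(tail * len(draws))]          # lower credible bound
    hi = draws[int((1 - tail) * len(draws)) - 1]  # upper credible bound
    if lo > 0:
        return "upward"
    if hi < 0:
        return "downward"
    return "neutral"

rng = random.Random(0)
recovering = [rng.gauss(0.02, 0.005) for _ in range(4000)]  # interval above zero
flat = [rng.gauss(0.0, 0.01) for _ in range(4000)]          # interval straddles zero
assert classify_trend(recovering) == "upward"
assert classify_trend(flat) == "neutral"
```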
For Bayesian inference and estimates of uncertainty, we fit models using the Hamiltonian Monte Carlo (HMC) sampler Stan, coded using the brms package. We used weakly regularizing priors, and visual inspection of the HMC chains showed excellent convergence. All code for statistical analyses is available at https://github.com/sablowes/microbiome-disturbance .
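The within-sample permutation null model and Z-score described earlier can be sketched for a single pair of samples (the authors' implementation, applied to full dissimilarity matrices, is in their repository; the function names below are ours):

```python
import random

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two equal-length abundance vectors."""
    shared = sum(min(a, b) for a, b in zip(x, y))
    return 1.0 - 2.0 * shared / (sum(x) + sum(y))

def null_z(x, y, n_perm=1000, seed=0):
    """Z-score of the observed dissimilarity against a null distribution in
    which abundance values are shuffled across taxa within each sample,
    preserving each sample's richness and total reads."""
    rng = random.Random(seed)
    observed = bray_curtis(x, y)
    null = []
    for _ in range(n_perm):
        xs, ys = list(x), list(y)
        rng.shuffle(xs)   # permute abundances within each sample
        rng.shuffle(ys)
        null.append(bray_curtis(xs, ys))
    mu = sum(null) / n_perm
    sigma = (sum((v - mu) ** 2 for v in null) / n_perm) ** 0.5
    return (observed - mu) / sigma

# Two samples sharing no taxa: the observed dissimilarity (1.0) exceeds the
# null mean, so the pair is over-dispersed relative to the null (Z > 0).
z = null_z([10, 5, 0, 0], [0, 0, 8, 7])
assert z > 0
```

Negative Z-scores (as found for 97% of values in this study) indicate samples that are more similar than the shuffled-abundance null expects.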
Our final dataset included 2588 samples in 86 time series from 29 studies (Table S), belonging to soil micro- and mesocosms (n = 49), seawater mesocosms (n = 16), and mammalian microbiomes (n = 21) that were sampled multiple times within 50 days after disturbance (Fig. a). Across all samples, we detected 56,480 ASVs. Sample completeness was highest in mammalian microbiomes (0.98 ± 0.02; mean ± sd), lowest and most variable in soil microbiomes (0.93 ± 0.06), and was significantly different between environments (ANOVA, F = 475.1, p < 0.001, Fig. b).

Richness in disturbed and recovering microbiomes

Prior to disturbance, mean richness was highest in soil microbiomes with 327 ASVs [95% CI 196–506], followed by aquatic (184 [111–281]) and mammalian (86 [51–133]) microbiomes (Fig. a). While all environments exhibited decreases in microbiome richness following disturbance, only the decrease in the mammalian microbiomes statistically differed from zero, and all mammalian time series (n = 19) exhibited a downward richness trend (Table ). This pattern was primarily driven by time series that employed disturbances likely to cause mortality, those that introduced an invasion, or a combination of both (Fig. S). In contrast, all aquatic time series (n = 14) and most soil time series (n = 20), with the exception of four, exhibited neutral trends (Table ). On average, the post-disturbance richness in mammalian microbiomes was approximately 43% of that found pre-disturbance (Fig. a), and over time, richness increased consistently at a rate of approximately 2% (1–3%) per day (Fig. b), a phenomenon that was observed across disturbance types and was present in all mammal time series (n = 19) except for one that exhibited neutral trends. In general, the mammalian microbiomes that lost the most richness after disturbance also recovered this richness most rapidly over the following 50 days (Fig. S).
In contrast, no overall patterns were observed in the richness of aquatic and soil time series, which exhibited either neutral responses (n = 11 and n = 41 for aquatic and soil time series, respectively) or the continued loss of richness over time (n = 5 and n = 6, respectively; Table S). These results were consistent when alpha diversity recovery was assessed as inverse Simpson's index (Fig. S).

Dispersion and turnover

All microbial communities were underdispersed relative to the null expectation, and 97% of Z-scores were negative. All of the lowest Z-score values (< −400) belonged to mouse microbiomes, for which we detected fewer than 30 ASVs. On average, dispersion did not change immediately after disturbance in any environment (Fig. a, Table S). However, we found a decrease in dispersion values through time following the disturbance for mammalian microbiomes (Fig. b), though this pattern was not present in the Z-scores (Fig. S), indicating that the reduced compositional variation was associated with a reduction in richness rather than with changes in relative abundances. The strongest responses were from microbiomes exposed to invasion (n = 1), mortality (n = 10), or a mixture of both (n = 8, Fig. S). Most mammal time series (n = 13) exhibited decreasing dispersion over time, while 7 exhibited neutral dynamics (Table ). Similarly, soil time series exhibited mostly decreasing (n = 15) or neutral (n = 31) dispersion dynamics, with only one time series increasing in dispersion over time. In contrast, aquatic time series exhibited either neutral (n = 11) or increasing (n = 5) dispersion over time. We found environment-specific turnover between composition pre- and post-disturbance. On average, mammalian microbiomes exhibited negative turnover, and most time series (n = 14) tended to recover toward their pre-disturbance composition (Fig. , Table ).
This pattern was consistent across disturbance types and was strongest for microbiomes subjected to invasion (n = 1), mortality (n = 10), or a combination of both (n = 8, Fig. S). Importantly, negative turnover was not found when assessed with Z-scores (Fig. S), indicating that recovery occurred through an increase in richness, not through the recovery of relative abundances. In contrast, following disturbance, aquatic microbiomes exhibited positive turnover, tending away from their pre-disturbance controls over time. This pattern was present in all time series (n = 16) and was consistent whether raw values (Fig. ) or Z-scores were modeled (Fig. S), indicating that changes in the identity and relative abundance of taxa, rather than simply changes in the number of taxa in the system, were responsible for this drift away from a pre-disturbance composition. While all time series followed this response regardless of the type of disturbance, PAH- and metal-contaminated microbiomes (n = 1 for each) exhibited the strongest response (Fig. S). Notably, while no consistent responses were found in soil, most time series exhibited positive (n = 16) or neutral (n = 29) turnover, with only two time series tending towards recovery (i.e., negative turnover). Finally, to examine the relationship between the immediate disturbance responses (i.e., the strength of the disturbance) and compositional changes over time subsequent to the disturbance, we plotted rates of temporal turnover as a function of the magnitude of the immediate (< 4 days after disturbance) changes in richness (Fig. ). This relationship was environment-dependent.
Aquatic microbiomes predominantly exhibited no immediate richness responses to disturbance and positive turnover thereafter (i.e., composition moved away from pre-disturbance controls); mammalian microbiomes exhibited an immediate loss of richness and negative turnover (i.e., recovery toward pre-disturbance composition); and soil microbiomes exhibited very weak or no responses in terms of both immediate richness responses and turnover following the disturbance (Fig. ). This pattern was consistent, but weaker, when turnover Z-scores were modeled, especially for mammalian microbiomes (Fig. S).
Prior to disturbance, mean richness was highest in soil microbiomes with 327 ASVs [95% CI 196–506], followed by aquatic 184 [111–281], and mammalian 86 [51–133] microbiomes (Fig. a). While all environments exhibited decreases in microbiome richness following disturbance, only the decrease in the mammalian microbiomes statistically differed from zero, and all mammalian time series ( n = 19 time series) exhibited a downward richness trend (Table ). This pattern was primarily driven by time series which employed disturbances that likely caused mortality, or those that introduced an invasion, or a combination of both (Fig. S ). In contrast, all aquatic time series ( n = 14) and most soil time series ( n = 20) with the exception of four exhibited neutral trends (Table ). On average, the post-disturbance richness in mammalian microbiomes was approximately 43% of that found pre-disturbance (Fig. a), and over time, richness increased consistently at a rate of approximately 2% (1–3%) per day (Fig. b), a phenomenon that was observed across disturbance types and was present in all mammal time series ( n = 19) except for one that exhibited neutral trends. In general, the mammalian microbiomes that lost the most richness after disturbance also recovered this richness most rapidly over the following 50 days (Fig. S ). In contrast, no overall patterns were observed in the richness in aquatic and soil time series, although they exhibited either neutral responses or ( n = 11 and n = 41 for aquatic and soil time series) or the continued loss of richness over time ( n = 5 and n = 6, respectively, Table S ). These results were consistent when alpha diversity recovery was assessed as inverse Simpson’s index (Fig. S ).
All microbial communities were under dispersed relative to the null expectation, and 97% of Z -scores were negative. All of the lowest Z -score values (< − 400) belonged to mouse microbiomes, for which we detected fewer than 30 ASVs. On average, dispersion did not change immediately after disturbance for any environment (Fig. a, Table S ). However, we found a decrease through time following the disturbance in dispersion values for mammalian microbiomes (Fig. b), though this pattern was not present in the Z -scores (Fig. S ), indicating reduced compositional variation was associated with a reduction in richness, rather than changes in relative abundances. The strongest responses were from microbiomes exposed to invasion ( n = 1), mortality ( n = 10), or a mixture of both ( n = 8, Fig. S ). Most mammal time series ( n = 13) exhibited a decreasing dispersion over time, while 7 exhibited neutral dynamics (Table ). Similarly, soil time series exhibited mostly decreasing ( n = 15) or neutral ( n = 31) dispersion dynamics, with only one-time series increasing in dispersion over time. In contrast, aquatic time series exhibited either neutral ( n = 11) or increasing ( n = 5) dispersion over time. We found environment-specific turnover between composition pre- and post-disturbance. On average, mammalian microbiomes exhibited negative turnover, and most time series ( n = 14) tended to recover toward their pre-disturbance composition (Fig. , Table ). This pattern was consistent across disturbance types and was strongest for microbiomes subjected to invasion ( n = 1), mortality ( n = 10), or a combination of both ( n = 8, Fig. S ). Importantly, negative turnover was not found when assessed with Z -scores (Fig. S ), indicating that recovery occurred through an increase in richness, not due to the recovery of relative abundances. In contrast, following disturbance, aquatic microbiomes exhibited positive turnover, tending away from their pre-disturbance controls over time. 
This pattern was present in all time series (n = 16), and was consistent whether raw values (Fig. ) or Z-scores were modeled (Fig. S ), indicating that changes in the identity and relative abundance of taxa, rather than simply changes in the number of taxa in the system, were responsible for this drift away from the pre-disturbance composition. While all time series followed this response regardless of the type of disturbance, PAH- and metal-contaminated microbiomes (n = 1 for each) exhibited the strongest responses (Fig. S ). Notably, while no consistent responses were found in soil, most time series exhibited positive (n = 16) or neutral (n = 29) turnover, with only two time series tending towards recovery (i.e., negative turnover). Finally, to examine the relationship between the immediate disturbance responses (i.e., the strength of the disturbance) and compositional changes over time subsequent to the disturbance, we plotted rates of temporal turnover as a function of the magnitude of the immediate (< 4 days after disturbance) changes in richness (Fig. ). This relationship was environment-dependent. Aquatic microbiomes predominantly exhibited no immediate richness response to disturbance and positive turnover thereafter (i.e., composition moved away from pre-disturbance controls); mammalian microbiomes exhibited an immediate loss of richness and negative turnover (i.e., recovery toward pre-disturbance composition); and soil microbiomes exhibited very weak or no responses in terms of both immediate richness responses and turnover following the disturbance (Fig. ). This pattern was consistent, but weaker, when turnover Z-scores were modeled, especially for mammalian microbiomes (Fig. S ).
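The raw-value versus Z-score contrasts above rest on a standardized effect size: an observed dissimilarity is compared against a distribution of dissimilarities generated under a null model. A minimal sketch (the label-shuffling null here is only an illustration; the study's null model may be constrained differently):

```python
import numpy as np

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.abs(a - b).sum() / (a + b).sum())

def dissimilarity_z(a, b, n_null=999, seed=0):
    """Standardized effect size (Z-score) of an observed dissimilarity
    against a null built by shuffling taxon labels. Negative values mean
    the pair is more similar than the null expects (underdispersion)."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a), np.asarray(b)
    obs = bray_curtis(a, b)
    null = np.array([bray_curtis(rng.permutation(a), rng.permutation(b))
                     for _ in range(n_null)])
    return (obs - null.mean()) / null.std()
```

Identical vectors give a dissimilarity of 0 and fully disjoint vectors give 1; a Z-score near zero means the observed value is what the null expects, which is how richness-driven shifts are separated from shifts in relative abundances.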
We synthesized metabarcoding data to show how microbial community responses to disturbance vary across three environments at time scales that are relevant to microbiome turnover rates and bacterial life histories. We focused on the richness, dispersion, and turnover of microbiomes recovering from 86 different disturbances in three different environments, and further partitioned the latter two into shifts caused by changes in richness or in the relative distribution of taxa in order to shed light on the ecological processes driving microbial recovery. We found environment-specific responses: aquatic microbiomes tended away from their pre-disturbance composition following disturbance, while mammalian microbiomes tended to recover towards their pre-disturbance state. Soil microbiomes exhibited no clear patterns. Furthermore, we found no indication that disturbances increased dispersion in any environment, in contrast with the Anna Karenina Principle (AKP), and instead found the opposite pattern, especially in mammalian microbiomes. These findings highlight consistent response patterns within environments and consistent differences between environments.

Contrary to our expectation, we only found modest losses in richness following disturbance. On average, only mammalian microbiomes experienced statistically significant richness loss. This loss likely underscores the efficacy of antibiotics, which were used in 76% of mammalian microbiome time series, often in combination with an invader such as C. difficile. Disturbances in soil and aquatic environments in our study were dominated by nutrient additions (e.g., inorganic nitrogen and phosphorus inputs in aquatic microbiomes, or humic acid amendments in soil), which are not directly expected to decrease richness.
Surprisingly, we did not record any instance of a nutrient addition increasing richness in these systems, but this may be because all the experimental systems selected in the meta-analysis were partially closed to dispersal from the local environment (e.g., microcosms and mesocosms). Despite their strong initial response to disturbance, mammalian microbiomes exhibited a clear and rapid trend toward recovery over time. Our null model analyses showed that richness changes were largely responsible for the decreases in community dispersion (i.e., more similar taxon composition) and the negative turnover following the disturbance, suggesting that in mammals, disturbance generally resulted in the loss of specific taxa followed by a rapid recolonization by these taxa. Given the absence of this pattern in soil or aquatic microbiota, our findings suggest a role for the host in modulating, and perhaps accelerating, the recovery of the resident microbiota. Host behaviors such as eating and socializing may function as mechanisms of active dispersal and, together with the immune system, may act as a selective pressure, resulting in recovered microbiomes that resemble the undisturbed communities. Several studies have demonstrated the high variability in host responses to disturbance and the dependence of these responses on the environment; however, by comparing these responses with those found in other environments, we found that host-associated microbiomes exhibited the strongest and most consistent responses to disturbance. Surprisingly, aquatic microbiomes tended to become more dissimilar from their pre-disturbance compositions over time. This pattern may be due to the high connectivity and constant mixing of resources (i.e., nutrients) in aquatic microbiomes. Due to the different experimental designs included in this synthesis, it was not possible to determine whether the communities were generally drifting towards a specific composition (i.e., an alternative stable state).
In contrast, in the highly heterogeneous soil environment, microbiomes did not exhibit strong responses to disturbance. Nevertheless, similarities with the other environments were present: as in all environments, we recorded no instances of soil microbiomes increasing in richness immediately following disturbance. As in aquatic microbiomes, we also found no instances of soil microbiomes recovering their richness over time following disturbance, or of dispersion decreasing immediately after disturbance. We also found that a substantial portion of the soil time series tended away from their pre-disturbance state. As in mammalian microbiomes, we found several instances of dispersion tending to decrease over time. However, in all of the above cases, most soil time series exhibited neutral responses (i.e., no detectable trend). This pattern could be due to the extreme diversity and heterogeneity found in this system, or to technical limitations of this study. Nevertheless, standardizing the data to the maximum depth for each time series yielded identical results, suggesting that higher resolution may be necessary to capture community recovery in soils and to disentangle the role of rare taxa from stochasticity. The conservative approaches we employed for the selection, processing, and analysis of the data aimed to facilitate cross-study comparisons, but limited the contribution of rare taxa (i.e., those with low relative abundance) to our analyses of diversity change. Recognizing these limitations, we focused on the dominant taxa, using abundance-weighted metrics (Bray–Curtis). This likely impacted our analysis of soil most strongly, as soil microbiomes had the highest overall richness and lowest sample completeness estimates, and rare taxa are important sources of variation in soil microbiomes.
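The depth standardization mentioned above is classically done by rarefaction: subsampling each library, without replacement, to a common number of reads so that richness comparisons are depth-fair. A minimal sketch of the textbook version (the study's exact standardization procedure may differ):

```python
import numpy as np

def rarefy(counts, depth, seed=0):
    """Randomly subsample a vector of taxon counts to a fixed sequencing
    depth without replacement (classical rarefaction)."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts)
    if counts.sum() < depth:
        raise ValueError("sample is shallower than the target depth")
    reads = np.repeat(np.arange(counts.size), counts)  # one entry per read
    kept = rng.choice(reads, size=depth, replace=False)
    return np.bincount(kept, minlength=counts.size)

# A 1000-read sample standardized to 100 reads; totals now match across samples.
sub = rarefy([500, 300, 200], depth=100)
```

Because rare taxa are the first to drop out at low depth, any rarefaction-style standardization dampens their contribution, which is the limitation acknowledged above for soil.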
It is likely that our sample size (n = 86 time series) and statistical methods (applied to standardize and enable direct comparison across habitats) have together provided a broader analysis than was previously achieved by habitat-specific studies. We found no indication that dispersion increases immediately or over time following disturbance in any environment, in direct contrast with the AKP. The AKP proposes that dysbiotic microbiomes exhibit increased host-to-host variation. Importantly, our synthesis did not include measures of dysbiosis, as these were not consistently available and the definition of dysbiosis can vary widely. Instead, we compared the microbiomes to their pre-disturbance state and found that disturbance does not consistently increase dispersion, at least in the dominant portion of the community. While changes in dispersion are often reported in the microbial literature, dispersion is generally measured as pairwise Bray–Curtis dissimilarity among experimental or field replicates, which confounds changes in richness with compositional changes. We found that, in general, when dispersion decreased (i.e., in mammals), it was due to decreasing species richness in the community, not to changes in the relative abundance of community members. We also found that, in the absence of a host, soil and aquatic microbiomes tended to shift away from their pre-disturbance composition, suggesting that environmental microbiomes are less prone to recovery than mammalian ones. Taken together, this synthesis sheds light on similarities across environments and highlights the role of the host in microbiome recovery.
Our work highlights the need to reconsider the definition of disturbance in the microbiome. We included a wide range of disturbances and categorized them according to a framework that considered the direct effect of the disturbance on the microbial community and that largely echoes similar categorizations in macroecology. For example, when sterilized, organic amendments represent a novel source of resources, but when applied unsterilized, they also potentially include an invasive community, a scenario that deviates from the classic invasion literature. Furthermore, selective disturbances (e.g., antibiotics) remove similar taxa across experimental replicates, homogenizing microbiomes and decreasing dispersion. In contrast, disturbances that affect taxa randomly could lead to the microbiomes becoming more dissimilar, increasing the influence of ecological drift and, consequently, compositional dispersion. The duration of disturbances also varied, especially relative to bacterial life histories and ecologies. Pulse disturbances that last multiple days may encompass multiple life cycles for many microbial taxa. Similarly, disturbances that may be considered long-term changes for macro-organisms (e.g., oil pollution) may represent short-term resource pulses for oil-degrading bacteria. In a world in which microbiomes are exposed to increasing disturbance pressures, developing a set of descriptors for disturbances based on their effect on the microbiome's niche space and competitive landscape is urgently needed.

Our study reconciles several hypotheses that have been proposed for microbiomes, with different hypotheses supported in different environments. First, we find strong support for the tendency to drift away from the pre-disturbance state in aquatic systems, and mild support in soil systems.
Second, we find a strong tendency towards recovery in mammalian microbiomes, characterized by the loss of specific taxa during disturbance and their return thereafter. Third, we find little general evidence for changes in compositional dispersion (after accounting for changes in richness) following disturbance, in contrast to the AKP. Our work focused on community-level responses to disturbances across microbiomes, but did not delve into the responses of specific taxa due to differences in sequencing techniques (and especially primer choice) among studies. Future work may focus on smaller subsets of data that use consistent techniques to identify responsive taxa. Our results highlight how richness alone does not capture complex microbiome dynamics, similar to findings in broader ecology. Further work is needed to distinguish the consequences of selective versus non-selective disturbances (e.g., those that impact certain populations versus those that indiscriminately impact all populations) on microbiome responses. Overall, this work provides a new empirical perspective on the dynamics and generalities of microbiome disturbance responses, supported by directly comparable metrics, equivalent temporal scales among datasets, and a consistent modeling approach. It suggests that, with comparisons of standardized diversity measures, responses previously believed to apply to all microbiomes (e.g., the AKP) are not present, and that the environment (especially the host) is a key determinant of both the microbiome's response to, and recovery from, disturbance.
Additional file 1: Supplementary methods. Literature search and model descriptions.
Table S1. Accession numbers and links to all sequences reused in this work and their processing parameters.
Table S2. Slope estimates for models comparing immediate changes in dispersion following disturbance, calculated on Bray–Curtis values and null model outputs.
Figure S1. Proportion of reads preserved after quality filtering (a), chimera checking (b), and selection of bacterial reads (c).
Figure S2. Models fit to data standardized across studies and to data standardized within studies yield very similar parameter estimates. Each panel shows the fixed effect estimates for models fit to (a) richness before-after disturbance, (b) richness change through time following disturbance, (c) dispersion before-after disturbance, (d) dispersion (Z-score) before-after disturbance, (e) dispersion change through time following disturbance, (f) dispersion (Z-score) change through time following disturbance, (g) turnover change through time following disturbance, and (h) turnover (Z-score) change through time following disturbance.
Figure S3. Posterior distributions of the immediate response in richness to disturbance, separated by disturbance type and microbial realm.
Figure S4. The immediate effect of a disturbance on richness was only related to the rate of recovery of richness in mammals.
Figure S5. Slope and interval estimates of richness (Hill q0, purple) and inverse Simpson's index (Hill q2, blue) immediately following disturbance (a) and over time (b).
Figure S6. The effect of disturbance on microbiome dispersion, immediately (< 4 days) after disturbance (a), and over 50 days of recovery (b).
Figure S7. Posterior distribution of the temporal response of dispersion to disturbance, separated by disturbance type and microbial realm.
Figure S8. Posterior distribution of the temporal response of turnover to disturbance, separated by disturbance type and microbial realm.
Figure S9. The effect of disturbance on turnover.
Figure S10. Relationships between the immediate effect of a disturbance on richness and a microbiome's long-term recovery of composition vary among environments.
Technology of Combined Identification of Macrophages and Collagen Fibers in Liver Samples

Kupffer cells are resident liver macrophages that are in close contact with the sinusoidal capillaries of the liver and play an important role in the body's mononuclear phagocyte system. These cells perform various functions: phagocytosis of cell debris and toxins entering the liver via the portal vein; involvement in lipid (including cholesterol) and protein metabolism; immunosurveillance; and maintenance and regulation of the body's immune tolerance. In addition, one of the important functions of the Kupffer cells is their interaction with fibroblasts and myofibroblasts, the cells responsible for the synthesis and secretion of collagen precursors. Recent investigations have shown that proinflammatory activation of macrophages in various tissues and organs induces the release of the interleukins IL-4 and IL-13 and of profibrotic factors (TGF-β1, FGF-2, PDGF), stimulating epithelial-mesenchymal transformation and extracellular matrix deposition. This process results in remodeling of the extracellular matrix of the connective tissue and pathologic angiogenesis, which, in turn, drive persistent hyperactivation of fibroblasts and myofibroblasts. Thus, one of the functions of macrophages in organs and tissues (Kupffer cells in the liver, in particular) is profibrogenic regulation. The development of new universal approaches that combine the advantages of classic histological staining methods and immunohistochemical reactions is of great importance for histological practice. Simultaneous examination of the functional condition of the Kupffer cells and the connective tissue makes it possible to study the mechanisms of pathological changes developing in the liver.
The most commonly applied methods for examining the connective tissue on preparations are the histological methods of Van Gieson's staining and Mallory and Masson trichrome staining with aniline blue. These methods can be used to stain the liver tissue in both normal and pathological conditions. However, the Kupffer cells cannot be detected by classical histological staining techniques. It has previously been shown that they can be conveniently detected by an immunohistochemical reaction to the Iba-1 microglial protein. Therefore, to identify the connective tissue and the Kupffer cells concurrently, it is appropriate to combine histological staining of collagen fibers with aniline blue and immunohistochemical identification of the Kupffer cells by the reaction to the Iba-1 protein. It is worth mentioning that possible changes in the tissue properties due to stain absorption during the successive treatment of the material for the two variants of the investigation mean that the result of this combined approach is not self-evident. In this connection, the aim of our study was to assess the possibility of using a combined approach for the concurrent detection of Kupffer cells and the fibrous component of the connective tissue in liver samples.

The study was performed on liver samples of adult (4–6 months) Wistar (n=3) and SHR (n=3) rats. The rats were delivered from the "Rappolovo" (Leningrad region, Russia) and "Pushchino" (Moscow region, Russia) nurseries for laboratory animals and were housed in a vivarium at room temperature, under standard conditions, with free access to food and water. Housing and sacrifice of the animals complied with the ethical principles of the European Convention for the Protection of Vertebrate Animals used for Experimental and Other Scientific Purposes (Strasbourg, 2006) and Order No. 199n "On the Approval of the Rules of Good Laboratory Practice" (Russia, 2016). During the investigations, all international principles of animal use were observed.
The study was approved by the local Ethical Committee of the Institute of Experimental Medicine (Saint Petersburg, Russia). The left liver lobe was used for the investigation. The liver samples were fixed in zinc-ethanol-formaldehyde for 18–24 h at room temperature. The fixed material was embedded in paraffin according to the standard protocol, and blocks containing one liver lobe were fabricated. The paraffin blocks were cut into 5 μm sections using a Microm HM 325 rotary microtome (Thermo Fisher Scientific, USA), and the sections were mounted on HistoBond®+M adhesive microscope slides (Marienfeld, Germany). Standard procedures of dewaxing and rehydration were then conducted. Monoclonal rabbit antibodies to Iba-1 (clone JM36-62; ET1705-78; HuaBio, China) were used to detect the resident liver macrophages. The UltraVision Quanto Detection System HRP DAB (Thermo Fisher Scientific, USA) was employed as the secondary reagent for the primary rabbit antibodies. The sections were stained with a 2% aqueous solution of aniline blue (Unisource Chemicals Pvt. Ltd., India), a component of the Mallory and Masson trichrome stains, acidified with glacial acetic acid. To stain the sections, a mordant (phosphomolybdic acid) was applied first, followed by a freshly prepared solution of aniline blue. After dehydration in isopropanol and clearing in ortho-xylene, the obtained preparations were placed in the permanent Cytoseal 60 mounting medium (Richard-Allan Scientific, USA) and analyzed using an Axio Scope.A1 light microscope (Carl Zeiss, Germany). Photographs of the histological preparations were taken using a Zeiss Axiocam 105 color camera (A-Plan 20×/0.45 and 40×/0.65 objectives) and the ZEN 3 program (Carl Zeiss, Germany). The obtained images were morphometrically analyzed using the ImageJ2 program with the Fiji distribution.
To quantitatively assess the interlobular connective tissue and the distribution density of Iba-1-positive elements, the images were presegmented into 4 colors (red, yellow, blue, white) using the IJ-Plugins Toolkit and the k-means algorithm. As a result of the segmentation, binarized images corresponding to the examined structures were obtained. The total area of the Iba-1-immunostained structures and of the collagen fibers was evaluated with the help of standard ImageJ2 functions such as the color histogram, particle analysis, and measurement. A morphometric grid with a specified point density (11×11), applied to the image separately using the GIMP graphics editor, was also employed. To quantitatively evaluate the distribution density of the Iba-1-positive elements associated with the interlobular connective tissue, regions of interest were preselected with the help of the standard ImageJ2 "region of interest" function. Next, by means of IJ-Plugins k-means clustering, the images were segmented into 3 colors (RGB). The total area of the Iba-1-immunostained structures was estimated using the above-mentioned ImageJ2 functions for morphometric analysis. The measured area of the objects in the image was expressed in square micrometers and in percentages.

In the course of the preliminary research, we assessed the possibility of staining the collagen fibers with aniline blue after setting up the immunohistochemical reaction to the Iba-1 protein according to the previously developed protocol, which requires heat-induced epitope retrieval (HIER). The test results showed that after HIER, the collagen fibers were detected non-selectively, and in this connection, the protocol was modified. Testing different modes of primary reagent incubation resulted in exclusion of the HIER stage, reduction of the period of primary reagent incubation (from 3 days to 1 day), and elevation of the incubation temperature (from 27.5 to 35°C).
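The k-means presegmentation and area measurement described above can also be sketched outside ImageJ2. A minimal Python illustration (the naive initialization, toy pixel values, and calibration factor are assumptions, not the plugin's implementation):

```python
import numpy as np

def kmeans_pixels(pixels, k, n_iter=20):
    """Minimal Lloyd's k-means over an (N, 3) array of RGB pixels,
    standing in for the IJ-Plugins k-means segmentation step.
    Naive initialization from evenly spaced pixels; real pipelines
    use k-means++-style seeding."""
    pixels = np.asarray(pixels, dtype=float)
    centers = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)].copy()
    for _ in range(n_iter):
        d2 = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels

def area_stats(mask, um_per_px):
    """Area of a binary mask in square micrometers and as % of the field."""
    n = int(mask.sum())
    return n * um_per_px ** 2, 100.0 * n / mask.size

# Toy field: half aniline-blue-like pixels, half bright background.
pix = np.array([[40, 60, 200]] * 50 + [[250, 250, 250]] * 50)
labels = kmeans_pixels(pix, k=2)
blue_mask = labels == labels[0]
area_um2, pct = area_stats(blue_mask, um_per_px=0.5)
```

Counting pixels in the binarized cluster mask and multiplying by the squared pixel calibration is what turns a segmentation into the μm² and percentage values reported below.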
This version of the protocol allowed us to achieve optimal results of the immunohistochemical reaction. As a result of the reaction to the Iba-1 protein, multiple Iba-1-immunopositive structures with morphological features matching the Kupffer cells were identified in all examined samples. No background staining was observed. The detected cells were morphologically similar, and clear, mostly uniform staining of the cytoplasm was noted. In some cases, the location of the nucleus was visible. Projections, well visualized owing to the intense staining, were characteristic of the majority of Kupffer cells. In all examined samples, these projections contacted vascular endothelial cells, hepatocytes, and other connective tissue cells in the region of the periportal laminar boundary layer. Exclusion of the HIER had no negative effect on the detection of macrophages, reduced the possibility of nonspecific staining, and improved the preservation of the liver tissue samples during section treatment. At the same time, the elevation of the incubation temperature allowed us to decrease the time of holding the sections in the primary antibody solution. Visually, the immunohistochemical reaction to the Iba-1 protein was highly intense in all examined samples and did not prevent selective staining of the collagen with aniline blue. Staining with aniline blue in all liver samples of Wistar and SHR rats was selective, uniform, and clear, and allowed for differentiation of the connective tissue in all sections. Treatment of the sections with phosphomolybdic acid and staining with aniline blue after the immunohistochemical reaction to Iba-1 did not negatively affect the preservation of the DAB chromogen reaction product. No reduction in the staining intensity of the immunohistochemical reaction product, or washing of it out of the sections, was noted.
The combined staining method subsequently allowed morphometric estimation of the area of the immunopositive structures and the area occupied by the collagen fibers within the field of view. The image presented in was used as an example for quantitative analysis. Thus, the area occupied by the collagen fibers was calculated using a morphometric grid; the estimated area was 11,617.24 μm² (20.66% of the total area of the image). The total area of the Iba-1-immunostained structures was calculated on the basis of the binarized image using a color histogram and was equal to 2330.08 μm² (4.15% of the total area of the image). The macrophages associated with the interlobular connective tissue were automatically segmented using the ImageJ2 k-means clustering plugin. This plugin enabled us to selectively estimate the total areas of the macrophages of the interlobular connective tissue and of the Kupffer cells, and also to determine the number of cells and cell fragments based on the color image. The total area of the macrophages of the interlobular connective tissue was 1013.61 μm² (1.80%), with 76 cell fragments detected. The total area of the Kupffer cells was 1316.47 μm² (2.34%), with 16 cells and cell fragments detected in total.

Fibrosis and activation of the immune system cells (the resident liver macrophages, Kupffer cells, in particular) accompany the majority of chronic liver diseases. In diagnostic practice, it is often possible to establish the form of fibrosis only from the results of histological investigations. This investigation is necessary for the development of biological models of fibrosis, at the preclinical stage of developing new medicinal preparations, and during clinical trials.
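The grid-based estimate quoted above (an 11×11 grid, 121 points, giving 20.66% of the field) follows the classical stereological point-counting rule: the fraction of grid points falling on a structure estimates its area fraction, which, multiplied by the calibrated field area, gives μm². A minimal sketch (the mask values are illustrative; only the 11×11 density matches the grid described):

```python
import numpy as np

def grid_area_fraction(mask, n=11):
    """Overlay an n x n point grid on a binary mask and estimate the
    structure's area fraction as hits / total points."""
    h, w = mask.shape
    ys = np.linspace(0, h - 1, n).round().astype(int)
    xs = np.linspace(0, w - 1, n).round().astype(int)
    hits = int(mask[np.ix_(ys, xs)].sum())
    return hits / (n * n)

# A field whose left half is collagen-positive yields a fraction near 0.5;
# multiplying by the calibrated field area converts it to square micrometers.
mask = np.zeros((110, 110), dtype=bool)
mask[:, :55] = True
frac = grid_area_fraction(mask)
```

Point counting trades precision for speed: with 121 points the estimate is coarse, which is why the grid and pixel-counting values reported for the same field need not coincide exactly.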
In the present investigation, we optimized the stages of setting up the immunohistochemical reaction in the previously proposed protocol for Kupffer cell detection using antibodies to the microglial marker Iba-1, which allowed aniline blue to be applied for staining the histological sections. To enhance the specificity of collagen fiber staining after the immunohistochemical investigation and to preserve the tinctorial properties of the tissue, the possibility of removing the HIER step was considered. The HIER procedure is used rather frequently in immunohistochemical investigations, since it enhances the sensitivity of the method; however, direct tissue heating may change the tinctorial properties of the examined tissue, which distorts the results of subsequent histological staining. Thus, it has been shown that denaturation of polypeptide chains and breaking of the bonds between them occur during the heating process. Collagen denaturation is a multistage process accompanied by impairment of the specific configuration of the glycine, proline, and alanine molecules. Exclusion of the antigen retrieval stage in the presented immunohistochemical staining technique allowed us to avoid denaturation of the collagen in the sections, which makes it possible to study tissue macrophages and to detect connective tissue fibers within the limits of one section. This gives the researcher the tools for exploring their mutual arrangement and for a more precise assessment of the functional state of the organ. When immunohistochemical staining methods are used for the identification of collagen fibers, non-specific staining of the cellular elements of the examined tissue sample is often possible, and the reaction setup becomes more complicated, requiring the application of two chromogens.
Collagen is not a conserved protein: the diversity of its structural variants in different animal species is thought to be caused by variability in the amino acid sequence and the collagen type. In this connection, the selection of primary antibodies for each species and each examined collagen type is labor-intensive and expensive. In contrast, the classical histological methods for staining the intercellular substance of the connective tissue have advantages over the immunohistochemical ones and may prove more suitable for the researcher. Owing to its versatility in the detection of various types of collagen and its high affinity, aniline blue in combination with pretreatment with phosphomolybdic or phosphotungstic acid is often used for collagen investigations in different organs and tissues. All these factors determined the choice of aniline blue as the histochemical stain for the present method, which made it possible to detect the collagen fibers of the connective tissue specifically. It has been established that our method using this stain is suitable for combined application with immunohistochemical techniques for detecting macrophages and for subsequent morphometric analysis.

Optimization of the developed protocol for detecting Kupffer cells with antibodies to the Iba-1 microglial marker allows for the simultaneous identification of the resident liver macrophages and the collagen fibers without heat-induced antigen retrieval. The presented staining method makes it possible to perform morphometric analysis effectively, including binarization, color image segmentation, determination of the areas and numbers of objects, and calculation of structure areas using a morphometric grid.
SEOM-GEICO Clinical Guidelines on cervical cancer (2023)

Cervical cancer (CC) is the fourth most common cancer among women globally. In 2022, there were 661,021 new cases diagnosed worldwide: 61,072 in Europe and 1,679 in Spain. Globally, a total of 350,000 deaths were reported in 2022. Approximately 90% of all new cases and deaths reported worldwide occurred in low- and middle-income countries. The 5-year relative survival for women diagnosed with CC between 2013 and 2019 was 67.2%. The variation in CC rates across different geographic regions can be attributed to disparities in the prevalence of human papillomavirus (HPV) infection, a major risk factor for CC, as well as to differences in screening availability and limited access to vaccination in transitioning countries. HPV, and the oncogenic subtypes HPV16 and 18 in particular, is detected in around 99% of cervical tumors. Prophylactic administration of the HPV vaccine to females aged 9 through 12 has proven to be an effective measure in preventing HPV infection and related diseases. As a result, several countries have implemented HPV vaccination programs [II, A]. On the other hand, advances in secondary prevention, and in particular the introduction of highly sensitive HPV DNA testing, have improved the effectiveness of traditional Papanicolaou cytology in screening programs. This development has strengthened secondary prevention methods intended to diagnose CC at an early stage and prevent its progression [II, A]. This guideline is based on a systematic review of relevant published studies and on the consensus of ten expert oncologists in the treatment of this disease from GEICO (the Spanish Gynaecological Cancer Research Group) and SEOM (the Spanish Society of Medical Oncology), together with an external review panel of two experts designated by SEOM.
The Infectious Diseases Society of America-US Public Health Service Grading System for Ranking Recommendations in Clinical Guidelines has been used to assign levels of evidence and grades of recommendation (Table ).

Early CC is frequently asymptomatic, underscoring the importance of screening. Abnormal cervical cytology or a positive high-risk HPV test should prompt the performance of colposcopy and biopsy, or excisional procedures such as loop electrosurgical excision and conization. Sometimes, incidentally visible lesions are discovered upon pelvic examination. Carcinomas can be exophytic, growing out of the surface, or endophytic, with stromal infiltration and minimal surface growth. If a gross palpable lesion is present, diagnosis is based on biopsy. If a thorough pelvic examination cannot be carried out or there is uncertainty regarding vaginal/parametrial involvement, it should preferably be conducted under anesthesia. Locally advanced CC (LACC) may cause abnormal vaginal bleeding or discharge, pelvic pain, and dyspareunia. These symptoms are non-specific and may be mistaken for vaginitis or cervicitis. Some patients present with pelvic or lower back pain, which may radiate along the posterior side of the lower extremities. Bowel or urinary symptoms, such as pressure-related complaints, hematuria, hematochezia, or the passage of urine or stool through the vagina, are uncommon and point toward advanced disease.

The World Health Organization (WHO) recognizes three categories of epithelial tumors of the cervix: squamous, glandular, and other epithelial tumors, along with mixed epithelial and mesenchymal tumors and germ cell tumors. Squamous cell carcinoma (SCC) accounts for approximately 80% of all CC, while adenocarcinoma (ADC) accounts for some 20%. Historically, the development of all CC has been regarded as being associated with HPV infection (HPV-A).
Nevertheless, it has recently been recognized that a significant proportion of cervical ADC are HPV-independent (HPV-I). HPV status is both a prognostic and predictive factor. HPV-A tumors entail better prognosis and better response to treatment compared with HPV-I tumors. Therefore, the latest WHO classification of lower genital tract tumors in 2020 categorizes CC into HPV-A and HPV-I. High-risk HPV genotypes cause the vast majority (> 95%) of SCC. Twelve HPV types are classified by WHO as oncogenic: 16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, and 59. However, two types (16 and 18) alone are responsible for 70% of all SCC. The HPV viral oncoproteins E6 and E7 inactivate p53 and RB1, respectively. This inactivation is associated with the integration of HPV into the host genome, resulting in genomic instability and the accumulation of somatic mutations. Several factors have been linked to an increased risk of HPV persistence and progression, including immunosuppression (particularly due to human immunodeficiency virus), multiparity, smoking, and the use of oral contraceptives.
Squamous cell carcinoma
These tumors arise in dividing epithelial cells in high-grade squamous intraepithelial lesions (HSIL), a so-called transforming infection. The progression of high-grade lesions to SCC requires the accumulation of additional, yet incompletely understood, genetic and epigenetic alterations, a process that may take up to 20–30 years. While HPV analysis is not necessary for diagnosis, p16 immunoreactivity can serve as a surrogate marker for high-risk HPV infection. More than 70% of HPV-A SCC exhibit genomic alterations in either one or both of the PI3K/MAPK and TGF-β signaling pathways. Genes such as ERBB3 (HER3), CASP8, HLA-A, SHKBP1, and TGFBR2 have been reported as significantly mutated.
Adenocarcinoma
ADC encompasses a heterogeneous group of tumors. Most are HPV-A (typically types 18, 16, and 45), although around 10–15% are HPV-I.
The usual type accounts for about 75% of all ADCs, while the mucinous type represents some 10%. HPV-A ADCs tend to have low levels of copy-number alterations and low scores for epithelial-mesenchymal transition. KRAS mutations are common. All forms of invasive HPV-A ADCs can exhibit either destructive or non-destructive (ADC in situ) growth patterns. This classification has revealed associations between tumor invasive patterns and risk of nodal metastases, recurrence, and survival.
Other histologies
Rare cervical cancer histologies include adenosquamous carcinoma, neuroendocrine tumors (small cell and large cell neuroendocrine carcinoma), rhabdomyosarcoma, primary cervical lymphoma, and cervical sarcoma. Accurate histological identification using specific markers is essential for optimal patient management.
Predictive biomarkers in cervical cancer
Programmed Death-Ligand 1 (PD-L1) expression is a biomarker that predicts benefit from immune checkpoint inhibitors in patients with cervical cancer. Additionally, PD-L1 expression is more prevalent in squamous cell carcinomas compared to adenocarcinomas.
Recommendations for patients with recurrent, progressive, or metastatic disease:
PD-L1 Testing: PD-L1 expression testing is recommended in patients with recurrent, progressive, or metastatic cervical cancer.
HER-2 Immunohistochemistry (IHC) Testing: should be conducted to identify patients who may benefit from HER-2 targeted therapies.
Mismatch Repair (MMR) Testing: MMR status can be evaluated using IHC.
Next-Generation Sequencing (NGS): may be contemplated to assess microsatellite instability (MSI) and tumor mutational burden (TMB), which can provide additional insights into potential therapeutic options.
Since the beginning of the FIGO (International Federation of Gynecology and Obstetrics) staging system, physical examination has been the primary tool for staging purposes. However, the latest FIGO update in 2018 incorporates imaging and pathology findings to improve the prognostic correlation and better tailor treatment (Table ). Recommended radiological imaging includes pelvic magnetic resonance imaging (MRI) to evaluate local disease extension (preferred for FIGO stage IB1–IB3). Additionally, positron emission tomography/computed tomography (PET/CT) is recommended in early stages with suspicious lymph nodes (LN) or in locally advanced tumors (stage IB3 and higher) to assess nodal and distant disease. If PET/CT is not available, chest and abdominal CT can be used instead [II, B]. Cystoscopy and proctoscopy are only recommended if bladder or rectal invasion is suspected [IV, D]. Sentinel lymph node (SLN) mapping is especially relevant for staging early-stage cervical cancer (FIGO stages IA1 with lymphovascular space invasion, IA2, and IB1). SLNs should undergo ultrastaging to detect low-volume metastasis; non-sentinel nodes do not require ultrastaging. Para-aortic lymph node (PALN) evaluation has been the object of debate in recent years. PALN involvement is closely related to pelvic LN metastasis and tumors > 2 cm. Surgical staging versus PET/CT has been evaluated in patients with no suspicious radiological pelvic LN invasion, given that it can modify the extension of the radiotherapy field.
Most evidence is retrospective, and some randomized trials were prematurely closed or included patients with suspicious LN. These studies showed that surgery identified more PALN metastases, but without a clear benefit in OS compared with PET/CT staging. A randomized trial has recently been initiated, designed to demonstrate whether para-aortic lymphadenectomy followed by tailored chemoradiation improves results compared with FDG-PET/CT staging alone followed by chemoradiation. Therefore, PALN dissection may be considered to reduce the risk of undetected occult metastases when imaging shows no PALN involvement [II, B]. Tumor risk assessment includes several factors: tumor size, stage, depth of tumor invasion, LN status, lymphovascular space invasion (LVSI), and histological subtype. These factors have been included in trials to individualize the best adjuvant treatment. The “Sedlis Criteria” (GOG-092 trial) identify intermediate-risk factors: deep stromal invasion (> 1/3), lymphovascular space involvement, or tumor size > 4 cm. The GOG-109 trial identified high-risk factors: positive LN, positive margins, and/or microscopic parametrial involvement. According to the SEER database 2022, the 5-year survival rates are 91% for early stages, 60% for locally advanced stages, and 19% for metastatic cases. Kristensen et al. reported that the 5-year survival rate was better for patients with smaller tumors (94.8% if < 2 cm and 79.1% if 2–3.9 cm). Five-year survival is < 50% in patients with pelvic LN metastasis and < 20–30% in those with PALN metastasis.
Early-stage disease
T1a1 disease: conization with negative margins should be considered [IV, C]. Sentinel lymph node (SLN) biopsy is worth considering in LVSI-positive cases [IV, B]. T1a2 disease: conization with clear margins or a simple hysterectomy (HT) is deemed sufficient [IV, B].
While SLN biopsy can be contemplated for LVSI-negative patients, it is recommended in LVSI-positive cases [IV, B].
Management of T1b1, T1b2, and T2a1 disease
For patients diagnosed with stage IB1, IB2, or IIA1, surgery stands as the most suitable choice [I, A]. The initial surgical step should involve LN staging [IV, A]. SLN mapping should be performed and any suspicious nodes removed intraoperatively [III, A]. If any LN involvement is detected intraoperatively, refrain from further surgical procedures, opting instead for definitive concurrent chemoradiotherapy (CRT) [III, A]. In these cases, consider para-aortic lymph node dissection (PALND) for staging purposes [IV, C]. If both sides reveal negative SLN in pelvic level I, LN dissection can be confined to level I [IV, B]. When SLN is not detected on either side, LN dissection should include the usual areas: obturator fossa, external iliac regions, common iliac regions, and presacral region [III, A]. Based on the LACC trial findings, laparotomy remains the recommended approach for radical parametrectomy procedures due to the higher risk of relapse associated with minimally invasive surgery (MIS) [I, A]. However, a retrospective multicenter study found no increased risk of relapse with MIS in a low-risk group of patients with small tumors (< 2 cm) following conization with clear margins, in whom MIS may be regarded as acceptable [IV, C]. The recent SHAPE study suggests that for early-stage, low-risk cervical carcinoma (FIGO 2018 stages IA2–IB1 ≤ 2 cm with limited stromal invasion), simple total HT could be considered, inasmuch as it has demonstrated non-inferiority to radical HT in terms of 3-year pelvic recurrence, recurrence-free survival, and overall survival (OS) rates. When surgery is not feasible, consider definitive CRT and brachytherapy (BT) [IV, B].
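For illustration only, the stage- and LVSI-driven early-stage recommendations above can be sketched as a simple lookup. The function name and text encodings are hypothetical assumptions, and such a sketch is no substitute for multidisciplinary tumor-board decision-making:

```python
# Purely illustrative sketch of the early-stage management recommendations
# described above (T1a1/T1a2 and T1b1-T2a1); names and encodings are
# hypothetical and not part of the guideline itself.

def early_stage_plan(stage: str, lvsi: bool) -> list:
    """Return the guideline-suggested steps for an early substage."""
    if stage == "T1a1":
        plan = ["conization with negative margins [IV, C]"]
        if lvsi:
            plan.append("consider SLN biopsy [IV, B]")
    elif stage == "T1a2":
        plan = ["conization with clear margins or simple hysterectomy [IV, B]"]
        # SLN biopsy: recommended if LVSI-positive, optional otherwise
        plan.append("SLN biopsy recommended [IV, B]" if lvsi
                    else "SLN biopsy may be contemplated [IV, B]")
    elif stage in ("T1b1", "T1b2", "T2a1"):
        plan = ["surgery preferred [I, A]",
                "lymph-node staging as the initial surgical step [IV, A]"]
    else:
        raise ValueError(f"substage {stage!r} not covered by this sketch")
    return plan

print(early_stage_plan("T1a1", lvsi=True))
```

The branch structure simply mirrors the text above; any real decision also weighs fertility wishes, comorbidity, and imaging findings.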
Fertility-sparing treatment
Fertility-sparing therapy is suitable for young patients with tumors < 2 cm (stage IA and IB1), with squamous cell carcinoma or HPV-related ADC [III, B]. Thorough counseling on disease and pregnancy risks is recommended. Approaches vary depending on tumor stage and LVSI status. In T1a1/T1a2/T1b1 tumors, both conization and simple trachelectomy can be recommended, regardless of LVSI presence [IV, B], while in T1b1, radical trachelectomy remains an option [IV, B] but is strongly advised in LVSI-positive cases [III, B]. LN staging is recommended following the principles of early-stage management [III, B].
Intermediate risk
In the absence of positive LN, pathology risk factors in the surgical specimens include size > 4 cm, deep cervical stromal invasion, and positive LVSI. According to Sedlis criteria, when two or more of these features are identified, CC is classified as intermediate risk.
This group of patients is treated disparately around the world, partly because the original studies did not account for other significant risk factors, such as histology and proximal margins, which are considered in the current landscape of CC treatment. In the original GOG-092 trial, 277 patients with two or more risk features were randomized to observation vs external beam radiation therapy (EBRT). With a median follow-up of 10 years, a significant benefit was demonstrated in terms of progression-free survival (PFS) (HR 0.54; 95% CI 0.35–0.81; p = 0.007), albeit not OS (HR 0.7; p = 0.07) [II, B]. The role of chemotherapy (ChT) in this population is presently the object of research in the GOG-263 trial.
High risk
If positive pelvic LN, positive surgical margins, and/or positive parametrium are identified, postoperative pelvic EBRT with concurrent platinum-containing ChT is recommended. In the GOG-109 trial, 268 women with stage IA2, IB, and IIA CC received adjuvant radiotherapy (RT) with or without ChT (cisplatin–5-fluorouracil) for 4 courses. The study evidenced that the ChT arm achieved better 4-year OS (81% vs. 71%) and PFS (80% vs. 63%) outcomes [I, A]. That being said, the current ChT regimen of choice is weekly cisplatin. The cisplatin dosage for this schedule is 40 mg/m² per week, with a 70-mg weekly limit based on other concurrent trials in locally advanced disease.
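As a worked arithmetic example of the weekly cisplatin schedule above (40 mg/m² with a 70-mg weekly cap), the dose for a given body surface area (BSA) can be checked as follows. The use of the Mosteller BSA formula and the helper names are illustrative assumptions; actual prescribing always requires clinical verification:

```python
# Worked example of the weekly cisplatin dosing described above:
# 40 mg per m2 of body surface area, capped at 70 mg per week.
# Mosteller BSA and function names are illustrative, not prescriptive.
import math

def mosteller_bsa(height_cm: float, weight_kg: float) -> float:
    """Body surface area (m2), Mosteller formula: sqrt(h * w / 3600)."""
    return math.sqrt(height_cm * weight_kg / 3600)

def weekly_cisplatin_mg(bsa_m2: float, dose_per_m2: float = 40,
                        cap_mg: float = 70) -> float:
    """40 mg/m2 weekly, limited to the 70-mg cap cited above."""
    return min(bsa_m2 * dose_per_m2, cap_mg)

bsa = mosteller_bsa(165, 70)                 # about 1.79 m2
print(round(weekly_cisplatin_mg(bsa), 1))    # 1.79 * 40 = 71.6 -> capped at 70.0
```

The cap matters in practice: any BSA above 1.75 m² would otherwise exceed 70 mg per week.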
Locally advanced disease is defined as FIGO stages IB2, II, III, and IVA. Radical treatment with EBRT and weekly cisplatin followed by brachytherapy has demonstrated benefit in five phase 3 trials, as well as in a Cochrane meta-analysis. This approach results in a 10% increase in OS and a 50% decrease in the risk of relapse and is the current standard of care [I, A]. The alternative in case of renal impairment could be weekly carboplatin AUC 2.
Adjuvant chemotherapy
The potential role of adjuvant ChT following a concurrent treatment modality has been addressed in two phase 3 trials with contradictory results and has not been endorsed as standard treatment. While adding two cycles of adjuvant carboplatin and gemcitabine increased PFS and OS, toxicity was an issue.
On the other hand, the OUTBACK trial, with four cycles of carboplatin and paclitaxel, failed to increase either PFS or OS. The main concerns have consistently been the low adherence to ChT after concurrent treatment and toxicity.
Addition of immunotherapy
The recently published KEYNOTE-A18 study examined the efficacy of adding pembrolizumab to standard CRT in patients with high-risk, locally advanced CC (FIGO 2014 stage IB2–IIB with node-positive disease or stage III–IVA). The results revealed an increase in PFS, with an HR of 0.70 (95% CI 0.55–0.89; p = 0.0020), and a 24-month OS of 87% in the pembrolizumab–chemoradiotherapy group versus 81% in the placebo–chemoradiotherapy group. On 12 January 2024, the Food and Drug Administration (FDA) approved pembrolizumab with CRT for patients with FIGO 2014 stage III–IVA CC, and the combination is now under review by the European Medicines Agency. Therefore, the addition of pembrolizumab to CRT will likely become the standard treatment for LACC in the near future [I, A]. This combination is not approved by the European Medicines Agency (EMA) for cervical cancer and is not reimbursed by the Spanish public healthcare system at the time of writing this document.
Neoadjuvant/induction chemotherapy
The neoadjuvant/induction ChT approach has been addressed in two different settings. The first aims to make locally advanced CC amenable to surgery and compares ChT followed by surgery with standard concurrent CRT. Two large phase 3 trials failed to prove improved OS, and a meta-analysis that included smaller studies has not modified standard treatment. The second approach involves induction ChT in LACC prior to standard CRT, compared with CRT alone. The GCIG INTERLACE trial randomized 500 patients with IB1 node-positive, IB2, II, IIIB, and IVA (FIGO 2008) disease to 6 cycles of weekly carboplatin (AUC 2) and paclitaxel (80 mg/m²) before CRT versus CRT alone.
This showed a 5-year PFS rate of 73% with induction ChT prior to CRT compared with 64% with CRT alone (HR 0.65; 95% CI 0.46–0.91; p = 0.013), together with a 5-year OS benefit over the 72% observed with CRT alone (HR 0.61; 95% CI 0.40–0.91; p = 0.04) [I, B].
Recommendations
Primary weekly cisplatin-based (40 mg/m²) CRT remains the standard of care for LACC until immunotherapy is approved [I, A].
Induction chemotherapy with the INTERLACE regimen before definitive CRT might be an option for selected patients [I, B].
Adjuvant ChT after CRT is not recommended [I, D].
Neoadjuvant ChT before radical surgery is not a standard approach in LACC [I, D].
The addition of pembrolizumab to CRT will likely become the standard treatment for LACC [I, A].
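As a rough consistency check on the INTERLACE figures quoted above: assuming proportional hazards, the hazard ratio implied by two survival fractions at a fixed time point is the ratio of their log-survival values. This is only a back-of-envelope approximation; the published HR comes from the full time-to-event analysis:

```python
# Back-of-envelope check: under proportional hazards,
# HR ~ ln(S_treatment) / ln(S_control) at a fixed time point.
import math

def hr_from_survival(s_treat: float, s_ctrl: float) -> float:
    """Hazard ratio implied by two survival fractions at one time point."""
    return math.log(s_treat) / math.log(s_ctrl)

# 5-year PFS: 73% with induction ChT + CRT vs 64% with CRT alone
print(round(hr_from_survival(0.73, 0.64), 2))  # 0.71, roughly in line with the reported 0.65
```

The small gap between 0.71 and the reported 0.65 is expected, since the trial HR averages over the whole follow-up period rather than a single time point.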
Patients suspected of recurrent disease require a thorough diagnostic work-up, and the recurrence should be histologically confirmed.
Central pelvic recurrence
The recommended treatment following primary surgery includes definitive CRT combined with BT. External boost techniques should not replace BT. For previously irradiated patients, pelvic exenteration based on tumor location is suggested. This recommendation is typically reserved for referral centers with specialized expertise in managing persistent or recurrent CC cases. Reirradiation should be selectively weighed, considering factors such as disease volume, time since prior RT, and total dose administered.
Pelvic sidewall recurrence
After primary surgery, CRT is the preferred option. If this is not feasible, extensive pelvic surgery should be considered, including intra-operative RT when free surgical margins cannot be achieved. For those who received prior RT, extensive pelvic surgery is the first option. Patients ineligible for surgery due to comorbidities or a low probability of complete resection should receive systemic ChT.
Recommendations
Pelvic exenteration is recommended for central pelvic recurrence where there is no involvement of the pelvic sidewall, extrapelvic nodes, or peritoneal disease [IV, B].
Reirradiation for central recurrences could be considered in selected cases. This must be performed only in specialized centers [IV, C].
In patients with pelvic sidewall involvement, extended pelvic surgery can be considered in specialized centers [IV, B].
Patients who are not candidates for extensive surgery should be treated with systemic chemotherapy [IV, B].
The risk of recurrence ranges from 16 to 30% in early stages and up to 70% in LACC.
Most relapses occur within the first two years after diagnosis, and 50–60% of patients will have disease beyond the pelvis. Subjects who develop distant metastases, either at initial presentation or at relapse, are rarely curable. For highly selected patients with isolated distant metastases amenable to local treatment, occasional long-term survival has been reported. ChT is often recommended for patients with extrapelvic metastases or recurrent disease who are not candidates for RT or exenterative surgery.
First-line treatment
Cisplatin has been regarded as the most effective agent for metastatic CC. Cisplatin-based doublets with topotecan or paclitaxel have demonstrated superiority over cisplatin monotherapy in terms of response rate and PFS. Cisplatin/paclitaxel is less toxic than cisplatin/topotecan and is considered the regimen of choice [II, B]. Tumor angiogenesis plays a significant role in CC. The GOG240 phase III trial examined the addition of bevacizumab to combination ChT regimens (cisplatin/paclitaxel or topotecan/paclitaxel) in 452 patients with metastatic, persistent, or recurrent CC in the first-line setting. The study revealed significant improvements in OS among patients receiving bevacizumab (16.8 months vs 13.3 months; HR 0.77; 95% CI 0.62–0.95; p = 0.007). Additionally, data from a phase III randomized trial (JCOG0505) suggested that carboplatin/paclitaxel was non-inferior to cisplatin/paclitaxel in 253 patients with metastatic or recurrent CC. However, cisplatin remains the key drug for patients who have not previously received platinum agents. Furthermore, the phase II CECILIA trial proved that bevacizumab can be safely combined with carboplatin–paclitaxel, with the incidence of fistula/gastrointestinal perforation aligning with that observed in the GOG240 study.
Given these results and based on the balance between efficacy and toxicity, paclitaxel and platinum ChT combined with bevacizumab was deemed the regimen of choice in first-line metastatic or recurrent CC. Programmed death ligand 1 (PD-L1) also plays a role in CC pathogenesis. In the phase II KEYNOTE-158 trial, an objective response to pembrolizumab was noted in 14.3% of patients with PD-L1-positive tumors who had received > 1 prior ChT regimens for recurrent or metastatic disease. This treatment is not approved by the European Medicines Agency (EMA) for cervical cancer and is not reimbursed by the Spanish public healthcare system at the time of writing this document. The KEYNOTE-826 trial showed that PFS and OS were significantly greater with pembrolizumab than with placebo among patients with persistent, recurrent, or metastatic CC who were also receiving platinum-based chemotherapy with or without bevacizumab. The addition of pembrolizumab significantly improved PFS (10.4 months vs 8.2 months; HR 0.62; 95% CI 0.50–0.77; p < 0.001) and OS (28.6 vs 16.5 months; HR 0.60; 95% CI 0.49–0.74), leading to regulatory approval of pembrolizumab for persistent, recurrent, or metastatic CC tumors expressing PD-L1 with a combined positive score (CPS) ≥ 1 [I, A]. In the small subgroup of patients with a CPS < 1, the hazard ratios for PFS and OS were close to 1. Given the small size of that subgroup (11.2% of the patients), the effect of adding pembrolizumab appears to be small. Recently, in the phase III BEATcc trial, patients with metastatic (stage IVB), persistent, or recurrent CC were randomly assigned in a 1:1 ratio to receive bevacizumab plus platinum and paclitaxel, with or without atezolizumab. BEATcc evaluated the PD-L1 inhibitor atezolizumab in a biomarker-unselected population, and the use of bevacizumab was mandatory. Median PFS was 13.7 months with atezolizumab compared to 10.4 months with standard therapy (HR 0.62; 95% CI 0.49–0.78; p < 0.0001).
Median OS was 32.1 months with atezolizumab compared to 22.8 months with standard therapy (HR 0.68; 95% CI 0.52–0.88, p = 0.0046). This combination is not approved by the European Medicines Agency (EMA) for cervical cancer and is not reimbursed by the Spanish public healthcare system, at the time of writing this document.

Second-line and single agents

In patients progressing after first-line therapy, several chemotherapies, such as vinorelbine, topotecan, gemcitabine, or paclitaxel, have been examined. However, response rates to these treatments were very low (10–13%) and responses were of short duration. To determine whether immune checkpoint inhibitors were superior to standard ChT in terms of OS after failure of platinum therapy, the phase 3 GOG 3016/ENGOT-cx9 (EMPOWER Cervical-1) trial randomized 608 patients to receive cemiplimab or investigator’s choice of intravenous ChT. Cemiplimab exhibited a statistically significant improvement in OS compared to ChT (12.0 months vs 8.5 months; HR 0.69; 95% CI 0.56–0.84; p < 0.001), both in SCC and in the entire population, regardless of PD-L1 status. These results led to cemiplimab receiving regulatory approval as monotherapy to treat patients with recurrent or metastatic CC and disease progression on or after platinum-based ChT [I, A]. Tisotumab vedotin (TV) is an antibody–drug conjugate that targets tissue factor. TV revealed promising and durable responses in the treatment of patients with recurrent or metastatic CC in a phase 2 study, which led to its accelerated approval in the US. Recently, the global phase III innovaTV 301/ENGOT-cx12/GOG-3057 trial randomized patients with recurrent or metastatic CC with progression on or after standard of care to TV monotherapy or the investigator’s choice of chemotherapy. The TV arm exhibited a 30% reduction in risk of death vs chemotherapy (HR 0.70; 95% CI 0.54–0.89; p = 0.0038), with significantly longer median OS (11.5 months vs 9.5 months).
This treatment is not approved by the European Medicines Agency (EMA) for cervical cancer and is not reimbursed by the Spanish public healthcare system, at the time of writing this document.

Recommendations

Platinum-based ChT combined with pembrolizumab is recommended for medically fit patients with recurrent/metastatic PD-L1 positive CC, assessed as CPS of 1 or more [I, A]. Carboplatin/paclitaxel and cisplatin/paclitaxel are the preferred regimens [I, A]. The addition of bevacizumab is recommended when the risk of significant gastrointestinal/genitourinary fistula has been carefully assessed and discussed with the patient [I, A]. Patients who progress after first-line platinum-based ChT and have not yet received immunotherapy should be offered cemiplimab, regardless of PD-L1 tumor status [I, A] (Fig. ).

Cisplatin has been regarded as the most effective agent for metastatic CC. Cisplatin-based doublets with topotecan or paclitaxel have demonstrated superiority over cisplatin monotherapy in terms of response rate and PFS. Cisplatin/paclitaxel is less toxic than cisplatin/topotecan and is considered the regimen of choice [II, B]. Tumor angiogenesis plays a significant role in CC. The GOG240 phase III trial examined the addition of bevacizumab to combination ChT regimens (cisplatin/paclitaxel or topotecan/paclitaxel) in 452 patients with metastatic, persistent, or recurrent CC in the context of first-line treatment. The study revealed significant improvements in OS among patients receiving bevacizumab (16.8 months vs 13.3 months; HR 0.77; 95% CI 0.62–0.95; p = 0.007). Additionally, data from a phase III randomized trial (JCOG0505) suggested that carboplatin/paclitaxel was non-inferior to cisplatin/paclitaxel in 253 patients with metastatic or recurrent CC. However, cisplatin remains the key drug for patients who have not previously received platinum agents.
Furthermore, the phase II CECILIA trial proved that bevacizumab can be safely combined with carboplatin-paclitaxel, with the incidence of fistula/gastrointestinal perforation aligning with that observed in the GOG240 study.
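The "reduction in risk" figures quoted alongside these trials follow directly from the reported hazard ratios (relative reduction ≈ 1 − HR). A minimal sketch of that arithmetic, using the HRs quoted above, purely as an illustrative check:

```python
def relative_risk_reduction(hazard_ratio: float) -> float:
    """Relative reduction in the event rate implied by a hazard ratio, as a percentage."""
    return round((1.0 - hazard_ratio) * 100.0, 1)

# Hazard ratios quoted in the trials above (illustrative check only):
print(relative_risk_reduction(0.70))  # innovaTV 301, OS -> 30.0 ("30% reduction in risk of death")
print(relative_risk_reduction(0.69))  # EMPOWER Cervical-1, OS -> 31.0
print(relative_risk_reduction(0.62))  # KEYNOTE-826, PFS -> 38.0
```

Note that this is the relative reduction in the instantaneous event rate, not an absolute risk difference.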
Follow-up recommendations in CC are based on the individual risk of recurrence, which depends on prognostic factors, the treatment approach, and patient characteristics, although there is currently no evidence establishing the most appropriate strategy. Follow-up should be more thorough during the first 2–3 years after primary treatment, as this is when the majority of recurrences typically occur, especially in high-risk patients. History and complete physical examination, including vaginal and pelvic-rectal examination performed by a specialist, are recommended at each visit. Systematic cervical and/or vaginal cytology after CRT or surgery has a low positive predictive value for detecting recurrence. HPV testing could be useful instead, although strong evidence is still lacking. For high-risk patients with stage II or greater, CT or PET/CT (preferred) and pelvic MRI (recommended) should be performed within 3–6 months of completing therapy. A reasonable follow-up schedule involves visits every 3–6 months during the first two years and every 6–12 months during years 3–5. Table summarizes our recommendations for follow-up. The role of additional imaging has not been well established and should be guided by symptoms and clinical concern for suspected recurrent/metastatic disease. Patients should return to annual population-based general physical and pelvic examinations after five years of recurrence-free follow-up. Following treatment, patients should be educated about signs/symptoms suggestive of recurrence as a relevant part of the surveillance plan. Early use of vaginal dilators concurrent with lubricants and topical estrogen is recommended for suitable sexual rehabilitation. Patients should be informed about the possible benefits of healthy lifestyle habits in reducing the risk of recurrence and improving overall well-being.
The Promise of Artificial Intelligence in Digestive Healthcare and the Bioethics Challenges It Presents

Medicine is advancing swiftly into the era of Big Data, particularly through the more widespread use of Electronic Health Records (EHRs) and the digitalization of clinical data, intensifying the demands on informatics solutions in healthcare settings. Like all major advances throughout history, the benefits on offer come with new rules of engagement. Some 50 years have passed since what is considered to have been the birth of Artificial Intelligence (AI) at the Dartmouth Summer Research Project. This was an intensive 2-month project that set out to obtain solutions to the problems faced when attempting to make a machine that can simulate human intelligence. However, it was not until some years later that the first efforts to design biomedical computing solutions based on AI were seen. These efforts are beginning to bear fruit, and since the turn of the century, we have witnessed truly significant advances in this field, particularly in terms of medical image analysis. Indeed, a search for publications in the PubMed database using the terms “Artificial Intelligence” and “Gastrointestinal Endoscopy” returned 3 articles in 2017, as opposed to 42 in 2022 and 64 in 2021. While the true impact of these practices is yet to be seen in the clinic, their goals are clear: (i) to offer patients more personalized healthcare; (ii) to achieve greater diagnostic/prognostic accuracy; (iii) to reduce human error in clinical practice; and (iv) to reduce the time demands on clinicians as well as enhancing the efficiency of healthcare services. However, the introduction of these tools raises important bioethical issues.
Consequently, and before attempting to reap the benefits that they have to offer, it is important to assess how these advances affect patient–clinician relationships, what impact they will have on medical decision making, and how these potential improvements in diagnostic accuracy and efficiency will affect the different healthcare systems around the world.

1.1. The State-of-the-Art in Gastroenterology

A number of medical specialties such as Gastroenterology rely heavily on medical images to establish disease diagnosis and patient prognosis, as well as to monitor disease progression. Moreover, in more recent times, some such imaging techniques have been adapted so that they can potentially deliver therapeutic interventions. The digitalization of medical imaging has paved the way for important advances in this field, including the design of AI solutions to aid image acquisition and analysis. Different endoscopy modalities can be used to visualize and monitor the Gastrointestinal (GI) tract, making this an area in which AI models and applications could play an important future role. Indeed, this is reflected in the attempts to design AI-based tools addressing distinct aspects of these examinations and adapting to the different endoscopy techniques employed in the clinic. Accordingly, the development of such AI tools has been the focus of considerable effort of late, mainly with a view to improving the diagnostic accuracy of GI imaging and streamlining these procedures. The term AI is overarching, yet in the context of medical imaging, it can perhaps be more precisely defined by the machine learning (ML) class of AI applications, algorithms that are specifically used to recognize patterns in complex datasets.
“Supervised” and “unsupervised” ML models exist, although the former are perhaps of more interest in this context as they are better suited to predicting known outputs (e.g., a specific change in a tissue or organ, the presence of a lesion in the mucosa or debris in the tract, etc.). Multi-layered Convolutional Neural Networks (CNNs) are a specific type of deep learning (DL) model, a modality of ML. Significantly, CNNs excel in the analysis, differentiation and classification of medical images and videos, essentially due to their artificial resemblance to neurobiological processes. As might be expected, there have been significant technical advances in endoscopy over the years. Indeed, two decades have now passed since Capsule Endoscopy (CE: also known as Wireless or Video CE) was shown to be a valid minimally invasive diagnostic tool to visualise the intestine in its entirety, including the small bowel (SB) and colon. CE systems involve three main elements. Firstly, there is the capsule that houses the camera, and now perhaps multiple cameras, as well as a light source, a transmitter and a battery. The second element is a sensor system that receives the information transmitted by the capsule and is connected to a recording system. Finally, there is the software required to display the endoscopy images so they can be examined. All these CE elements have undergone significant improvements since they were initially developed. For example, there have been numerous improvements to the capsules (e.g., in their frame acquisition rates, their angle of vision, the number of cameras, and manoeuvrability), as well as to the software used to visualise and examine the images obtained. One of the benefits of CE is that it offers the possibility of examining relatively inaccessible regions of the intestine, such as the SB, structures that are difficult to access using standard endoscopy protocols.
Consequently, CE can be used to evaluate conditions that are difficult to diagnose clearly, such as chronic GI bleeding; tumours, and especially SB tumours; mucosal damage; Crohn’s disease (CD); chronic iron-deficiency anaemia; GI polyposis; or celiac disease. There are also fewer contraindications associated with the use of CE, although these may include disorders of GI motility, GI tract narrowing/obstruction, dysphagia, large GI diverticula or intestinal fistula. Despite the evolution of these systems over the past two decades, they still face a number of challenges, and these will be the target of future improvements. As indicated, software has also been developed to aid in the reading and evaluation of the images acquired by CE, on the whole through efforts to decrease the reading times associated with these tests and to improve the accuracy of the results obtained. The time that trained gastroenterologists must dedicate to the analysis of CE examinations is a particularly critical issue, given the number of images generated (ca. 50,000). As such, considerable effort is required to ensure adequate diagnostic yields, with correspondingly high costs. Accordingly, the main limitation of CE, and particularly Colon Capsule Endoscopy (CCE), as a first-line procedure for the panendoscopic analysis of the entire GI mucosa is that it is a relatively time-consuming and laborious diagnostic test that requires some expertise in image analysis. In fact, the diagnostic yield of CE is in part hampered by the monotonous and laborious human CE video analysis, which translates into suboptimal diagnostic accuracy, particularly in terms of sensitivity and negative predictive value (NPV). It must also be considered that alterations may only be evident in a few of the frames extracted from CE examinations, which means there is a significant chance that important lesions might be overlooked.
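Computationally, this reading-burden problem is a frame-triage task: score every frame cheaply and surface only the suspicious ones for human review. A toy sketch in plain Python — the red-pixel-ratio score, the threshold and the tiny synthetic "frames" are illustrative assumptions, a crude stand-in for the features real detectors learn:

```python
def red_ratio(frame):
    """Fraction of pixels that are strongly red (frame: list of (r, g, b) tuples)."""
    reddish = sum(1 for r, g, b in frame if r > 150 and r > 2 * max(g, b))
    return reddish / len(frame)

def triage(frames, threshold=0.10):
    """Return indices of frames flagged for human review."""
    return [i for i, f in enumerate(frames) if red_ratio(f) >= threshold]

# Two synthetic 4-pixel "frames": one mostly mucosa-pink, one with fresh blood.
normal = [(180, 120, 110)] * 4
bleeding = [(200, 30, 25)] * 3 + [(180, 120, 110)]

flagged = triage([normal, bleeding, normal])
print(flagged)  # [1]: only the bleeding frame is surfaced for review
```

In a real system the per-frame score would come from a trained model rather than a hand-made colour rule, but the triage logic — and the resulting reduction in frames a clinician must read — is the same.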
Indeed, the inter- and intra-operator error associated with the reading process is one of the main sources of error in these examinations. As a result, there has been much interest from an early stage in the development of these systems to design software that can be used to automatically detect certain features in the images obtained. For example, there have been attempts to include support vector machines (SVMs) within CE systems, in particular for the detection of blood/hematic traces. In this sense, one of the most interesting recent and future developments in CE is the possible incorporation of AI algorithms to automate the detection, differentiation and stratification of specific features of the GI images obtained.

1.2. Automated Analysis and AI Tools to Examine the GI Tract

Several studies have showcased the potential of using CNNs in different areas of digestive endoscopy. For example, when performing such examinations, the preparation and cleanliness of the GI tract are fundamental to ensure the validity of the results obtained. Nevertheless, clearly validated scales to assess this feature of endoscopy examinations are still lacking, which has inspired efforts to design AI tools based on CNN models that can automatically evaluate GI tract cleanliness in these tests. Obviously, and in line with the advances in other areas of medicine, many studies have centred on the design of AI tools capable of detecting lesions on or alterations to the GI mucosa likely to be associated with disease, as well as specific characteristics of these changes. Indeed, the potential to apply these systems in real time could offer important benefits to the clinician, particularly when contemplating conditions that require prompt diagnosis and treatment.
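The core pattern-recognition operation inside such CNN models is the convolution: a small kernel is slid across the image and responds strongly wherever a local pattern matches. A toy, stdlib-only sketch — the 5×5 "frame" and the 3×3 spot-detection kernel are illustrative assumptions, not any published model:

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (no padding, stride 1) on lists of lists."""
    kh, kw = len(kernel), len(kernel[0])
    out_h, out_w = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

frame = [[0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0],
         [0, 0, 9, 0, 0],   # a small bright "lesion"
         [0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0]]

spot_kernel = [[-1, -1, -1],
               [-1,  8, -1],
               [-1, -1, -1]]  # responds to a bright spot on a dark background

response = conv2d(frame, spot_kernel)
peak = max(max(row) for row in response)
print(peak)  # 72 (= 9 * 8): the response map peaks where the bright pixel sits
```

A trained CNN stacks many such kernels in successive layers and learns their weights from labelled examples, rather than using a hand-written kernel as here.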
Moreover, these systems could potentially be used in combination or in conjunction with other AI tools, such as those designed to assess the quality of preparation, or in attempts not only to identify lesions but also to establish their malignant potential. We must also consider that the implementation of AI tools for healthcare administration is likely to have a direct effect on gastroenterology, as it will on other clinical areas. Thus, in light of the increase in the number of AI applications being generated that may potentially be integrated into standard healthcare, it becomes more urgent to address the bioethical issues that surround their use before they are implemented in clinical practice. In this sense, it is important to note that while existing frameworks could be adjusted to regulate the use of clinical AI applications, their disruptive nature makes it more likely that new ‘purpose-built’ regulatory frameworks and guidelines will have to be drawn up, from which regulations can be defined. Moreover, in this process, it will be important to ensure that the AI innovations they are designed to control are enhanced and not limited by the regulations drawn up.
The potential benefits that are provided by any new technology must be weighed up against any risks associated with its introduction. Accordingly, if the AI tools that are developed to be used with CE are to fulfil their potential, they must offer guarantees against significant risks, perhaps the most important of which are related to issues of privacy and data protection, unintentional bias in the data and design of the tools, transferability, explainability and responsibility. In addition, it is clear that this is a disruptive technology that will require regulatory guidelines to be put in place to legislate the appropriate use of these tools, guidelines that are on the whole yet to be established. However, it is clear that the need for such regulation has not escaped the healthcare regulators, and, as in other fields, initiatives have been launched to explore the legal aspects surrounding the use of AI tools in healthcare that will clearly be relevant to digestive medicine as well.

2.1. Privacy and Data Management for AI-Based Tools

Ensuring the privacy of medical information is increasingly challenging in the digital age. Not only are electronic data easily reproduced, but they are also vulnerable to remote access and manipulation, with economic incentives intensifying cyberattacks on health-related organisations. Breaches of medical confidentiality can have important consequences for patients. Indeed, they may not only be responsible for the shaming or alienation of patients with certain illnesses, but they could even perhaps limit their employment opportunities or affect their health insurance costs. As medical AI applications become more common, and as more data are collected and used/shared more widely, the threat to privacy increases. The hope is that measures such as de-identification will help maintain privacy, which will require this process to be adopted more generally in many areas of life.
However, the inconvenience associated with these approaches makes this unlikely to occur. Moreover, re-identification of de-identified data is surprisingly easy, and thus, we must perhaps accept that introducing clinical AI applications will compromise our privacy a little. This would be more acceptable if all individuals had the same chance of benefitting from these tools, in the absence of any bias, but at present, this does not appear to be the case (see below). While some progress in personal data protection has been made (e.g., the General Data Protection Regulation 2016/679 in the E.U. or the Health Insurance Portability and Accountability Act in the USA), further advances with stakeholders are required to specifically address the data privacy issues associated with the deployment of AI applications. The main aim of novel healthcare interventions and technologies is to reduce morbidity and mortality, or to achieve similar health outcomes more efficiently or economically. The evidence favouring the implementation of AI systems in healthcare generally focuses on their relative accuracy compared to gold standards, and as such, there have been fewer clinical trials carried out that measure their effects on outcomes. This emphasis on accuracy may potentially lead to overdiagnosis, although this is a phenomenon that may be compensated for by considering other pathological, genomic and clinical data. Hence, it may be necessary to use more extended personal data from EHRs in AI applications to ensure the benefits of the tools are fully reaped and that they do not mislead physicians. One of the advantages of using such algorithms is that they might identify patterns and characteristics that are difficult for the human observer to perceive, and even those that may not currently be included in epidemiological studies, further enhancing diagnostic precision.
However, this situation will create important demands on data management, on the safe and secure use of personal information and regarding consent for its use, accentuated by the large amount of quality data required to train and validate DL tools. Traditional opt-in/opt-out models of consent will be difficult to implement on the scale of these data and in such a dynamic environment . Thus, addressing data-related issues will be fundamental to ensure a problem-free incorporation of AI tools into healthcare , perhaps requiring novel approaches to data protection. One possible solution to the question of privacy and data management may come through the emergence of blockchain technologies in healthcare environments. Recent initiatives exploring the use of blockchain technology in healthcare may offer solutions to some of the problems regarding data handling and management, not least as this technology will facilitate the safer, traceable and efficient handling of an individual’s clinical information . The uniqueness of blockchain technology resides in the fact that it permits a massive, secure and decentralized public store of ordered records or events to be established . Moreover, the local storage of medical information is a barrier to sharing this information, as well as potentially compromising its security. Blockchain technology enables data to be carefully protected and safely stored, assuring their immutability . Thus, it could help overcome the current fragmentation of a patient’s medical records, potentially benefitting the patient and healthcare professionals alike, and it could promote communication between healthcare professionals at the same or at a different centre, radically reducing the costs associated with sharing medical data . AI applications can benefit from different features of a blockchain, which offers trustworthiness, enhanced privacy and traceability.
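The append-only, tamper-evident property that makes blockchains attractive here comes down to a simple idea: each block stores the hash of its predecessor, so altering any earlier record invalidates every later link. A minimal sketch (illustrative only; a real health-data ledger would add signatures, consensus and access control):

```python
import hashlib
import json


def block_hash(block: dict) -> str:
    # Hash the block's canonical JSON form (key order made stable).
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


def append_block(chain: list, payload: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev_hash": prev, "payload": payload})


def chain_is_valid(chain: list) -> bool:
    """Verify every block's prev_hash matches the hash of its predecessor."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )


ledger: list = []
append_block(ledger, "CE exam 001: no lesions detected")
append_block(ledger, "CE exam 002: angioectasia, jejunum")
assert chain_is_valid(ledger)

# Tampering with an earlier record breaks every subsequent link.
ledger[0]["payload"] = "CE exam 001: lesion detected"
assert not chain_is_valid(ledger)
```

In a decentralized deployment, many parties hold copies of the chain, so a tampered copy is immediately distinguishable from the consensus version.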
Indeed, when the data used in AI applications (both for training and in general) are acquired from a reliable, secure and trusted platform, AI algorithms will perform better. 2.2. The Issue of Bias in AI Applications Among the most important issues faced by AI applications are those of bias and transferability . Bias may be introduced through the training data employed or by decisions that are made during the design process . In essence, ML systems are shaped by the data on which they are trained and validated, identifying patterns in large datasets that reproduce desired outcomes. Indeed, AI systems are tailor-made, and as such, they are only as good as the data with which they are trained. Thus, when these data are incomplete, unrepresentative or poorly interpreted, the end result can be catastrophic . One specific type of bias, spectrum bias, occurs when a diagnostic test is studied in individuals who differ from the population for which the test was intended. Indeed, spectrum bias has been recognized as a potential pitfall for AI applications in capsule endoscopy (CE) , as well as in the field of cardiovascular medicine . Hence, AI learning models might not always be fully valid and applicable to new datasets. In this context, the integration of blockchain-enabled data from other healthcare platforms could serve to augment the number of what would otherwise be underrepresented cases in a dataset, thereby improving the training of the AI application and ultimately, its successful implementation. In real life, any inherent bias in clinical tools cannot be ignored and must be considered before validating AI applications. Similarly, overfitting of these models should not be ignored, a phenomenon that occurs when the model is too tightly tuned to the training data, and as a result, it does not function correctly when fed with other data .
This can be avoided by using larger datasets for training and by not training the applications excessively, and possibly also by simplifying the models themselves. The way outcomes are identified is also entirely dependent on the data the models are fed. Indeed, there are examples of different pathologies where certain physical characteristics achieve better diagnostic performance, such as lighter rather than darker skin, yet perhaps this is a population that is overrepresented in the training data. Consequently, it is possible that only those with fair skin will fully benefit from such tools . Human decisions may also skew AI tools, such that they may act in discriminatory ways . Disadvantaged groups may not be well-represented in the formative stages of evidence-based medicine , and unless this is rectified and human interventions combat this bias, it will almost certainly be carried over into AI tools. Hence, programmes will need to be established to ensure ethical AI development, such as those contemplated to detect and eliminate bias in data and algorithms . While bias may emerge from poor data collection and evaluation, it can also emerge in systems trained on high-quality datasets. Aggregation bias can emerge from using a single population to design a model that is not optimal for another group . Thus, the potential that bias exists must be faced and not ignored, searching for solutions to overcome this problem rather than rejecting the implementation of AI tools on this basis ( and ). Closely related to bias, transferability to other settings is a significant issue for AI tools . An algorithm trained and tested in one environment will not necessarily perform as well in another environment, and it may need to be retrained on data from the new environment. Even so, transferability is not ensured, and hence, AI tools must be carefully designed, tested and evaluated in each new context prior to their use with patients .
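A simple way to surface the aggregation bias described above is a stratified audit that reports sensitivity per subgroup rather than a single pooled figure. The sketch below uses entirely synthetic labels and predictions (all names and numbers are illustrative):

```python
def sensitivity(labels, predictions):
    """True-positive rate: detected lesions / actual lesions."""
    tp = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 1)
    positives = sum(labels)
    return tp / positives if positives else float("nan")


# Synthetic example: the pooled figure hides a weak subgroup.
# Each entry maps a subgroup to (true labels, model predictions).
data = {
    "group_a": ([1, 1, 1, 1, 0, 0], [1, 1, 1, 1, 0, 0]),  # sensitivity 1.00
    "group_b": ([1, 1, 1, 1, 0, 0], [1, 0, 0, 1, 0, 0]),  # sensitivity 0.50
}

pooled_labels = [y for ys, _ in data.values() for y in ys]
pooled_preds = [p for _, ps in data.values() for p in ps]

pooled = sensitivity(pooled_labels, pooled_preds)
per_group = {g: sensitivity(ys, ps) for g, (ys, ps) in data.items()}
```

Here a pooled sensitivity of 0.75 masks a subgroup in which half the lesions are missed, which is exactly the pattern such an audit is meant to expose.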
This issue also implies there must be significant transparency about the data sources used in the design and development of these systems, with the ensuing demands on data protection and safety. 2.3. The Explainability, Responsibility and the Role of the Clinician in the Era of AI-Based Medicine Another critical issue with regard to the application of DL algorithms is that of explainability and interpretability . When explainable, what an algorithm does and the value it encodes can be readily understood . However, it appears that less explainable algorithms may be more accurate , and thus, it remains unclear if it is possible to achieve both these features at the same time. How algorithms achieve a particular classification or recommendation may even be unclear to some extent to designers and users alike, not least due to the influence of training on the output of the algorithms and that of user interactions. Indeed, in situations where algorithms are being used to address relatively complex medical situations and relationships, this can lead to what is referred to as “black-box medicine”: circumstances in which the basis for clinical decision making becomes less clear . While the explanations a clinician may give for their decisions may not be perfect, they are responsible for these decisions and can usually offer a coherent explanation if necessary. Thus, should AI tools be allowed to make diagnostic, prognostic and management decisions that cannot be explained by a physician ? Some lack of explainability has been widely accepted in modern medicine, with clinicians prescribing aspirin as an analgesic without understanding its mechanism of action for nearly a century . Moreover, it still remains unclear why lithium acts as a mood stabilizer . If drugs can be prescribed without understanding how they work, then can we not use AI without fully understanding how it reaches a decision?
Yet as we move towards greater patient inclusion in their healthcare decisions, the inability of a clinician to fully explain decisions based on AI may become more problematic. Hence, perhaps we are right to seek systems that allow us to trace how conclusions are reached. Moreover, only through some degree of knowledge of AI can physicians be aware of what these tools can actually achieve and when they may be performing irregularly. AI is commonly considered to be of neutral value, neither intrinsically good nor bad, yet it is capable of producing good and bad outcomes. AI algorithms explicitly or implicitly encode values as part of their design , and these values inevitably influence patient outcomes. For example, algorithms will often be designed to prioritise a false-negative rather than false-positive identification, or to perform distinctly depending on the quality of the preparation. While the performance of AI systems would represent a limiting factor for diagnostic success, additional factors will also influence their accuracy and sensitivity, such as the data on which they are trained, how the data are used by the algorithm, and any conscious or unconscious biases that may be introduced. Indeed, the digitalisation of medicine has been said to have shifted the physician’s attention away from the body towards the patient’s data , and the introduction of AI tools runs the risk of further exacerbating this movement. Introducing AI tools into medicine also has implications for the allocation of responsibility regarding treatment decisions and any adverse outcomes based on the use of such tools, as discussed in greater depth elsewhere . At present, there appears to be a void regarding legal responsibility if the use of AI applications produces harm , and there are difficulties in clearly establishing the autonomy and agency of AI . 
Should any adverse event occur, it is necessary to establish if any party failed in their duty or if errors occurred, attributing responsibility accordingly. Responsibility for the use of the AI will usually be shared between the physician and institution where the treatment was provided, but what of the designers? Responsibility for acting on the basis of the output of the AI will rest with the physician, yet perhaps no party has acted improperly or the AI tool behaved in an unanticipated manner. Indeed, if the machine performs its tasks reliably, there may be no wrongdoing even when it fails. The points in an algorithm at which decisions are made may be complicated to define, and thus, clinicians may be asked to take responsibility for decisions they have not made when using a system that incorporates AI. Importantly, this uncertainty regarding responsibility may influence the trust of a patient in their clinician . Accordingly, the more that clinicians and patients rely upon clinical AI systems, the more that trust may shift away from clinicians toward the AI tools themselves . In relation to the above, the implementation of AI tools may also raise concerns about the role of clinicians. While there are fears that they will be ‘replaced’ by AI tools , the ideal situation would be to take advantage of the strengths of both humans and machines. AI applications could help to compensate for shortages in personnel , they could free up more of a clinician’s time, enabling them to dedicate this time to their patients or other tasks , or they might enhance the clinician’s capacity in terms of the number of patients they could treat. While decision making in conjunction with AI should involve clinicians, the issue of machine–human disagreement must be addressed . Alternatively, should we be looking for opportunities to introduce fully automated clinical AI solutions?
For example, could negative results following AI-based assessment of GI examinations be communicated directly to the patient? While this might be more efficient, it brings into question the individual’s relationship with the clinician. Indeed, the dehumanisation of healthcare may have a detrimental rather than a beneficial effect given the therapeutic value of human contact, attention and empathy . While clinicians may have more time to dedicate to their patients as more automated systems are incorporated into their workflow, they may be less capable of explaining AI-based healthcare decision making . Moreover, continued use of AI tools could deteriorate a clinician’s skills, a phenomenon referred to as “de-skilling” , such as their capacity to interpret endoscopy images or to identify less obvious alterations. Conversely, automating workflows may expose clinicians to more images, honing their skills by greater exposure to clinically relevant images, yet maybe at the cost of seeing fewer normal images. In addition, more extended use of automated algorithms may lead to a propensity to accept automated decisions even when they are wrong , with a negative effect on the clinician’s diagnostic precision. Thus, efforts must be made to ensure that the clinician’s professional capacity remains fine-tuned, avoiding dependence on automated systems and any potential loss of skills (e.g., in performing and interpreting endoscopies) when physicians are no longer required to perform these tasks themselves (the phenomenon of de-skilling has also been dealt with in more detail elsewhere ). Other issues have been raised in association with the clinical introduction of AI applications, such as whether they will lead to greater surveillance of populations and how this should be controlled.
Surveillance might compromise privacy but it could also be beneficial, enhancing the data with which the DL applications are trained, so this is perhaps an issue that it will be necessary to address in regulatory guidelines. Another issue that needs to be addressed is the extent to which non-medical specialists such as computer scientists and IT specialists will gain power in clinical settings. Finally, the fragility associated with reliance on AI systems and the potential that monopolies will be established in specific areas of healthcare will also have to be considered . In summary, it will be important to respect a series of criteria when designing and implementing AI-based clinical solutions to ensure that they are trustworthy .
We are clearly at an interesting moment in the history of medicine as we embrace the use of AI and big data as a further step in the era of medical digitalisation. Despite the many challenges that must be faced, this is going to be a disruptive technology in many medical fields, affecting clinical decision making and the doctor–patient dynamic in what will almost certainly be a tremendously positive way. Different levels of automation can be achieved by introducing AI tools into clinical decision-making routines, selecting between fully automated procedures and aids to conventional protocols as specific situations demand. Some issues that must be addressed prior to the clinical implementation of AI tools have already been recognised in healthcare scenarios. For example, bias is an existing problem evident through inequalities in the care received by some populations. AI applications can be used to incorporate and examine large amounts of data, allowing inequalities to be identified and leveraging this technology to address these problems. Through training on different populations, it may be possible to identify specific features of these populations that have an influence on disease prevalence, and/or on its progression and prognosis. Indeed, the identification of population-specific features that are associated with disease will undoubtedly have an important impact on medical research. However, there are other challenges that are posed by these systems that have not been faced previously and that will have to be resolved prior to their widespread incorporation into clinical decision-making procedures . Automating procedures is commonly considered to be associated with greater efficiency, reduced costs and savings in time. The growing use of CE in digestive healthcare and the adaptation of these systems to an increasing number of circumstances generates a large amount of information and each examination may require over an hour to analyse.
This not only requires the dedication of a clinician or specialist, and their training, but it may increase the chance of errors due to tiredness or monotony (not least as lesions may only be present in a small number of the tens of thousands of images obtained). DL tools have been developed based on CNNs to be used in conjunction with different CE techniques that aim to detect lesions or abnormalities in the intestinal mucosa. These algorithms are capable of reducing the time required to read these examinations to a matter of minutes (depending on the computational infrastructure available). Moreover, they have been shown to be capable of achieving accuracies and results not dissimilar to the current gold standard (expert clinician visual analysis), performances that will most likely improve with time and use. In addition, some of these tools will clearly be able to be used in real time, with the advantages that this will offer to clinicians and patients alike.

As well as the savings in time and effort that can be achieved by implementing AI tools, these advances may to some extent also drive the democratization of medicine and help in the application of specialist tools in less well-developed areas. Consequently, the use of AI solutions might reduce the need for specialist training to be able to offer healthcare services in environments that may be more poorly equipped. This may represent an important complement to systems such as CE that involve the use of more portable apparatus capable of being used in areas with more limited access and where patients may not necessarily have access to major medical facilities. Indeed, it may even be possible to use CE in the patient’s home environment. It should also be noted that enhancing the capacity to review and evaluate large numbers of images in a significantly shorter period of time may also offer important benefits in the field of clinical research.
Drug discovery programmes and research into other clinical applications are notoriously slow and laborious. Thus, any tools that can help speed up the testing and screening capacities in research pipelines may have important consequences in the development of novel treatments. Moreover, when performing multicentre trials, the variation in the protocols implemented is often an additional and undesired variable. Hence, medical research and clinical trials in particular will benefit from the use of more standardized and less subjective tools. Accordingly, offering researchers the ability to access large amounts of data that have been collected in a uniform manner, even when obtained from different sites, and making it possible to perform medical examinations more swiftly, can only benefit clinical research studies and trials.
In terms of the introduction of AI applications into clinical pipelines, we consider the future to be one of great promise. While it is clear that it will not be seamless and it will require the coordinated effort of many stakeholders, the pot of gold that awaits at the end of the rainbow seems to be getting ever bigger. These applications raise important bioethical issues, not least those related to privacy, data protection, data bias, explainability and responsibility. Consequently, the design and implementation of these tools will need to respect specific criteria to ensure that they are trustworthy . Since these are tools that are breaking new ground, the solutions to these issues may also need to be defined ad hoc, adopting novel procedures. This is an issue that cannot be overlooked as it may be critical to ensure that the opportunities offered by this technology do not slip through our hands.
Multicenter analysis on the correlation between the anatomical characteristics of hepatic veins and hepatic venous wedge pressure

Portal hypertension (PH) is a clinical syndrome often associated with complications such as ascites, gastroesophageal variceal bleeding, hepatic encephalopathy, and portal hypertensive gastropathy in patients with liver diseases. Clinically, the hepatic venous pressure gradient (HVPG) is considered the gold standard for diagnosing PH, with PH indicated when the HVPG exceeds 5 mmHg. However, several recent studies have suggested that the overall correlation between the HVPG and the portal pressure gradient (PPG) is poor when there is hepatic blood circulation shunting, as the HVPG tends to underestimate the PPG in most patients. The core variable determining whether the HVPG accurately represents the PPG is the wedged hepatic venous pressure (WHVP). The WHVP reflects the pressure in the hepatic sinusoids. In normal individuals, the free hepatic venous pressure (FHVP) and WHVP are similar because blood flow from the obstructed hepatic vein is dispersed through small vascular channels in the surrounding sinusoidal space, dissipating most of the pressure and preventing a significant increase in WHVP. Compared with direct measurement of portal venous pressure (PVP), preoperative measurements of the FHVP and WHVP can provide better insights into the location and severity of portal vein blockage in patients with PH, aiding surgeons in treatment planning. The correlation between WHVP and PVP is closely related to the underlying liver disease. For example, Thalheimer reported a strong correlation between WHVP and directly measured PVP in patients with alcoholic liver disease and hepatitis B virus infection, suggesting that WHVP is a reliable alternative measurement.
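For concreteness, the diagnostic rule cited above can be written as a short sketch. It assumes the standard definition of the gradient, HVPG = WHVP minus FHVP, which is not spelled out in this passage, and the pressure readings are purely hypothetical:

```python
def hvpg(whvp_mmhg: float, fhvp_mmhg: float) -> float:
    """Hepatic venous pressure gradient: wedged minus free hepatic
    venous pressure (standard definition, assumed here rather than
    quoted from the text)."""
    return whvp_mmhg - fhvp_mmhg

# Hypothetical readings in mmHg, for illustration only.
gradient = hvpg(whvp_mmhg=18.0, fhvp_mmhg=9.0)
portal_hypertension = gradient > 5  # diagnostic threshold cited in the text
print(gradient, portal_hypertension)  # 9.0 True
```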
However, in acute PH caused by hepatic sinusoidal obstruction syndrome, WHVP estimates PVP less accurately than in viral or alcohol-related cirrhosis, resulting in an overestimation of PVP. In decompensated nonalcoholic fatty liver disease-related cirrhosis, WHVP predicts PVP less accurately than in alcoholic or hepatitis C virus-related cirrhosis, resulting in an underestimation of PVP. Despite these insights, no large studies have examined the correlation between WHVP and PVP in situations involving anatomical shunting within the hepatic veins. Therefore, this study aimed to investigate the multivariate impact of different anatomical structures of hepatic venous shunts on WHVP and compare it with PVP. Using linear regression, we explored the relationship between WHVP and PVP and developed a more accurate prediction model for PVP, addressing the challenge of difficult PVP measurements.
Ethics statement

The research protocol adhered to every provision of the Helsinki Declaration and received approval from the Ethics Committees of three hospitals: Beijing Shijitan Hospital (Approval No. 2018/01), the Fifth Medical Center of Chinese PLA General Hospital (Approval No. KY-2023-12-82-1), and Beijing You’an Hospital (Approval No. LL-2023-042-K).

Patients

This retrospective study collected data from patients who underwent transjugular intrahepatic portosystemic shunt (TIPS) surgery for PH at three hospitals, namely Beijing Shijitan Hospital, the Fifth Medical Center of Chinese PLA General Hospital, and Beijing You’an Hospital, from January 2020 to June 2024. During the TIPS procedures, hepatic vein balloon occlusion was performed, and measurements of WHVP and PVP were recorded. Written informed consent was obtained from all patients involved in the study. The inclusion criteria were as follows: (1) Patients with PH who underwent TIPS; and (2) Patients who had intraoperative measurements of WHVP and PVP. The exclusion criteria were as follows: (1) Patients with primary and/or secondary liver tumors; (2) Patients with chronic liver failure; (3) Patients with any factors that could alter hepatic hemodynamics, such as previous liver and spleen surgeries or use of medications affecting portal venous pressure within the preceding week; (4) Patients with portal vein thrombosis occupying more than 50% of the vessel volume; (5) Patients with abnormalities of the hepatic vein or inferior vena cava; and (6) Patients in whom accurate pressure measurements were not possible due to factors such as the bile-cardiac reflex or incomplete balloon occlusion.

Measurement of the WHVP and PVP

WHVP and PVP were measured according to established standards. Routine disinfection and draping were performed, followed by local anesthesia and puncture through the right internal jugular vein.
A catheter was inserted, passing through the brachiocephalic vein, superior vena cava, and right atrium to reach the inferior vena cava and then the hepatic vein. A 5-French Fogarty balloon catheter (Edwards Lifesciences LLC, United States) was advanced to the terminal part of the hepatic vein for angiography, aiming to observe the overall anatomical structure of the hepatic vein and reconfirm the position of the balloon. The balloon tip of the catheter was positioned approximately 3-5 cm from the junction of the hepatic vein and the inferior vena cava, and balloon dilation was then commenced. The balloon was gradually inflated by injecting 2 mL of normal saline. During this process, both the pressure exerted by the balloon on the vessel wall and the inflation time had to be strictly controlled. The balloon was then inflated until it occluded the hepatic vein. While the balloon was dilating and occluding the vessel, 5 mL of contrast medium was injected to evaluate the sealing efficacy of the balloon and the condition of the hepatic vein. The catheter was then flushed with normal saline to remove the contrast medium. The FHVP was measured about 15 seconds after the pressure reading stabilized. The balloon remained inflated, and the WHVP was measured at around 45 seconds. Both FHVP and WHVP were measured three times, and the average values were recorded. If the sealing effect was unsatisfactory, the position of the balloon catheter was adjusted, and the measurements were repeated. After the initial pressure measurement was completed, an innovative angiographic technique identified in our previous research was adopted to measure the WHVP again. After the standard measurement method was finished, the hepatic vein was occluded again with the balloon at the same position as in the conventional method, and the dose of contrast medium was increased.
A high-pressure injector was used to inject a total volume of 15 mL of contrast medium at a stable pressure of 200 to 300 psi, at a rate of 5 mL/second, with continuous fluoroscopy for more than 6 seconds. The WHVP was measured again once the pressure stabilized. Three measurements were taken, and the average value was recorded. Meanwhile, the angiographic anatomical structure of the hepatic vein was documented. A RUPS-100 puncture set (Cook Medical, United States) was used to puncture the liver parenchyma through the hepatic vein or the inferior vena cava into the intrahepatic portal vein. After successful puncture, a pigtail catheter was advanced over a guidewire into the portal vein for angiography. The catheter was retracted to the main trunk of the portal vein, and once stable, the main portal venous pressure was measured (three measurements were taken, and the average value was recorded), and the PVP value was documented.

Statistical analysis

The data were initially subjected to tests for normality and homogeneity of variance via SPSS Statistics, version 20.0 (IBM). Normally distributed continuous variables are expressed as the mean ± SD, while non-normally distributed continuous variables are presented as medians (interquartile ranges). The nonparametric Mann-Whitney U test was used for comparisons between two groups. Pearson’s correlation analysis was used to analyze the relationship between WHVP and PVP, and data visualization was performed via R (version 4.2.1). Furthermore, linear regression analysis was conducted via SPSS 22.0 to establish relationships and prediction models for WHVP and PVP in the presence of different shunt structures, and predictive equations were derived. GraphPad Prism (version 9.5) software was used for data visualization. P < 0.05 was considered to indicate statistical significance.
Patient characteristics

A flow chart of the study, from initial retrieval to the final study cohort, is shown in Figure . A total of 877 patients (582 males, 295 females) were included in this study. The average age was 52.6 ± 13.0 years (range, 14 to 87 years). The etiological classification and main symptoms of PH are shown in Table . The hemodynamic pressure measurements were as follows: The average WHVP was 27.3 ± 9.3 mmHg, and the average PVP was 33.7 ± 7.05 mmHg.

Balloon occlusion of the hepatic vein with routine (5 mL contrast agent) imaging

After routine injection of 5 mL of contrast medium following hepatic venous balloon occlusion, the collateral display rate was only 25.5%. Specifically, the display rate for right hepatic vein-to-middle hepatic vein collaterals was 88.9%, for right hepatic vein-to-accessory hepatic vein collaterals it was 7.9%, and for right hepatic vein-to-portal vein collaterals it was 3.2%. The remaining 74.5% did not show any collaterals, portal veins, or minor branches of the hepatic veins.

Classification of hepatic venous anatomical variations during balloon occlusion hepatic venography

Group A: Right hepatic vein-middle hepatic vein angiography: During venography, another normal hepatic vein was visualized (Figure ).

Group B: Right hepatic vein-accessory hepatic vein angiography: During venography, one or multiple accessory hepatic veins were visualized simultaneously (Figure ).

Group C: Right hepatic vein-portal vein angiography: During venography, the portal vein was visualized (Figure ).

Group D: Right hepatic vein nonangiography: During venography, no other veins or portal veins were visualized (Figure ).

Correlation coefficients between the WHVP and PVP for different types of anatomical hepatic venous communication

WHVP and PVP in each group were compared via the nonparametric Mann-Whitney U test (Table and Figure ).
Pearson's correlation coefficient between the two pressure values was calculated for each group (Figure ). In Group A, r = 0.588 (95%CI: 0.5-0.7, P < 0.001); in Group B, r = 0.849 (95%CI: 0.8-0.9, P < 0.001); in Group C, r = 0.940 (95%CI: 0.9-1.0, P < 0.001); and in Group D, r = 0.545 (95%CI: 0.4-0.6, P < 0.001). The absolute value of the correlation coefficient represents the degree of correlation: 0-0.3 indicates weak or no correlation, 0.3-0.5 weak correlation, 0.5-0.8 moderate correlation, and 0.8-1 strong correlation.

Regression analysis between the WHVP and PVP in different hepatic venous anatomic shunt groups

A linear regression model was established with WHVP as the independent variable and PVP as the dependent variable. The model exhibited a good fit, and residual analysis did not reveal any significant outliers or deviations from the model assumptions. All four groups presented a significant positive correlation. Group C exhibited the best regression performance. Additionally, predictive equations for PVP based on WHVP were derived for each group. The results are presented in Table and Figure .
For patients with PH, determining whether the HVPG can truly represent the PPG, with WHVP as the core variable, holds significant clinical relevance; investigating the correlation and predictive models between WHVP and PVP is therefore essential. In healthy individuals, liver hemodynamics can self-adjust to maintain portal venous pressure within a normal range. The etiology of liver diseases is diverse, including infections, cirrhosis, intrahepatic and extrahepatic bile duct obstruction, drug toxicity, and alcohol-related conditions. These factors lead to pathological changes such as bile duct obstruction, liver fibrosis, and cirrhosis, resulting in increased portal vein resistance, blocked blood flow, and elevated portal venous pressure.

Cirrhosis progresses from an asymptomatic compensated phase to a decompensated phase, in which complications such as esophageal variceal bleeding, ascites, hepatic encephalopathy, and jaundice significantly affect prognosis. These complications are crucial variables for risk stratification and mortality prediction. As cirrhosis progresses, accurate assessment of portal venous pressure becomes essential for implementing therapeutic measures aimed at reducing portal venous pressure, preventing first-time variceal bleeding, and improving liver reserve function.

From the perspective of the anatomical relationships and hemodynamics of normal liver vasculature, the pressure measured after blocking the hepatic vein is equal to the pressure in the hepatic sinusoids, while direct pressure in the portal vein should be slightly greater than or equal to the hepatic sinusoidal pressure. Therefore, the HVPG is recognized as the gold standard for predicting the PPG and serves as a tool for diagnosing PH, predicting liver disease prognosis, assessing drug treatment effectiveness, and predicting the correlation with primary liver cancer.
While determination of the HVPG does not require advanced technical expertise or sophisticated hospital equipment, accuracy is paramount. Inaccuracy can directly affect disease staging, treatment selection, prognosis assessment, clinical practice, and scientific research. Consequently, the development of minimally invasive and precise diagnostic technologies remains a focal point and challenge in this field. In the presence of changes in liver hemodynamics, pathophysiology, and anatomical shunting, controversy arises regarding whether the HVPG can still serve as the "gold standard" representative of the PPG.

Our research team reported in earlier studies that the overall correlation between the HVPG and PPG in patients with hepatitis B-related cirrhotic PH and autoimmune liver diseases was poor; in most patients, the HVPG cannot accurately represent the PPG. The appearance of hepatic vein collaterals during angiography is a key factor in underestimation of the HVPG, with earlier collateral appearance leading to more pronounced underestimation, while the absence of collaterals leads to overestimation of the HVPG. When the WHVP is measured, a routine injection of 5 mL of contrast medium after balloon occlusion of the hepatic vein results in a collateral display rate of only 25.5%, primarily revealing collaterals between the right hepatic vein and middle hepatic vein; this fails to fully reflect the hepatic veins and shunting conditions.

Since WHVP is the core variable of the HVPG and determines the accuracy of the PVP assessment, in our study involving patients from three hospitals, we employed innovative hepatic vein angiography by increasing the contrast medium dosage from 5 mL to 15 mL, controlling the injection time, and using digital subtraction angiography. We specifically examined the correlation between the WHVP and PVP under the different anatomical conditions of the hepatic vein revealed by this innovative angiography, establishing a regression prediction model.
The results indicate that the correlation and regression between the WHVP and PVP are strongest in patients with right hepatic vein-to-portal vein collaterals; in patients with collaterals between the right hepatic vein and accessory hepatic veins, there is a high correlation and good regression between the WHVP and PVP; and even in patients with collaterals between the right hepatic vein and middle hepatic vein, or without collaterals, there is a correlation and regression relationship between the WHVP and PVP. In all four groups, a regression model based on WHVP can be used to predict PVP, enhancing the accuracy of PVP prediction. These differences are statistically significant (P < 0.05).

With the innovative angiography, 306 patients (34.9%) showed intrahepatic venous-venous collateral circulation from the right hepatic vein to the middle hepatic vein. Research suggests that when normal hepatic veins form collaterals with other hepatic veins, there is a relatively large shunt volume. Following balloon occlusion of the hepatic vein, there is little room for the pressure to rise because the pressure is diverted and remains significantly lower than the sinusoidal pressure, so the measured WHVP is significantly lower than the PVP. In addition, 219 patients (25.0%) had intrahepatic venous-venous collateral circulation from the right hepatic vein to an accessory hepatic vein. This scenario involves collateral formation between normal hepatic veins and relatively smaller accessory hepatic veins, resulting in a relatively smaller shunt volume. After balloon occlusion, there is some increase in pressure, but part of the pressure is still diverted, leading to a WHVP lower than the PVP. In the 177 patients (20.2%) in whom the portal vein was visualized, balloon occlusion caused the pressure to increase to match the sinusoidal pressure, allowing the contrast agent to reflux into the portal vein and resulting in basic pressure equilibrium, where WHVP equaled PVP.
In 175 cases (19.9%), no collaterals formed between the normal hepatic veins or with other hepatic veins during hepatic vein imaging, and the portal vein was not opacified. This suggests a significant pressure rise after hepatic vein occlusion due to the absence of decompression channels, with contrast failing to pass through the sinusoids into the portal vein. This situation mainly occurs in patients with portal vein reflux or significant shunting (splenorenal shunt, gastrorenal shunt, patent umbilical vein, etc.), possibly because high-pressure hepatic arterial flow disrupts sinusoidal flow regulation, leading to a WHVP higher than the PVP.

The presence and anatomical type of hepatic vein collaterals significantly affect the correlation and regression between WHVP and PVP. This allows for the estimation of PVP based on the WHVP of different anatomical types of hepatic veins, providing a convenient and effective method for assessing PH. WHVP directly measures the pressure within the hepatic veins and precisely reflects the pressure status of the hepatic vascular bed, providing a more direct and accurate quantitative assessment of the degree of PH. In contrast, ultrasound and contrast-enhanced computed tomography mainly infer PH through indirect indicators such as liver morphology, vessel diameter, and blood flow velocity, and cannot obtain pressure values as accurately as WHVP. The measurement of WHVP reflects not only the pressure but also, to some extent, the overall hemodynamic changes in the liver. However, WHVP measurement requires percutaneous puncture of the hepatic vein and placement of a catheter, an invasive procedure with risks such as bleeding, infection, and even life-threatening consequences. It also places relatively high demands on the skills of the operating physicians.
They need to possess rich experience in interventional operations and proficient catheter manipulation skills, as well as access to professional angiographic equipment and pressure measurement instruments. Moreover, there may be operator-dependent variability in obtaining and interpreting WHVP measurements, which might introduce a certain degree of imprecision into the results.

This study has several limitations. Larger clinical studies are needed to further validate the relationship between WHVP and PVP. Additionally, variations in etiology and in intragroup hepatic vein anatomical structures may affect the consistency of pressure measurements, necessitating personalized studies that analyze the correlation between WHVP and PVP in different diseases to establish more reliable predictive models. Meanwhile, although the innovative angiography technique used in this study improved the collateral display rate of the hepatic veins, whether the contrast agent has a transient impact on patient hemodynamics, thereby interfering with the pressure measurements, as well as the most suitable total contrast volume, injection rate (per second), and injection pressure for the innovative angiography, still require further research.

This study suggests that using different predictive models to evaluate PVP based on the anatomical collateral type of the hepatic veins can offer a more accurate, convenient, minimally invasive, and personalized approach, potentially providing new perspectives and strategies for the management, diagnosis, and treatment of patients with liver disease. With further research and clinical validation, WHVP could become a simpler and more reliable tool for assessing PVP, offering better guidance for patient prognosis and treatment outcomes.
In conclusion, this study established a predictive model and equation for WHVP and PVP in PH patients through correlation and linear regression analyses. The results revealed a correlation and regression relationship between the WHVP and PVP in PH patients, highlighting the significance of hepatic vein collaterals in influencing this relationship. This allows for the estimation of PVP based on the WHVP of different anatomical types of hepatic veins. Despite these limitations, this model has predictive capabilities and promising clinical applications. Future research should focus on refining and validating this model to increase its practicality and reliability in the diagnosis and treatment of PH.
We are grateful for the active cooperation and assistance of the doctors and nurses from the three hospitals, Capital Medical University Affiliated Beijing Shijitan Hospital, Beijing You’an Hospital and department of Diagnosis and Treatment of Hepatic Vascular Disease Center, the Fifth Medical Center of Chinese PLA General Hospital, during the surgical procedures for obtaining research data.
|
An Oral Health Promotion Model Implemented in the Primorje-Gorski Kotar County | a25a61e9-5e63-4ca7-9483-e72c67dae770 | 11857082 | Dentistry[mh] | In the modern world, children’s oral health has great societal and economic meaning. Regardless of the well-known nature of the disease and of its prevention methods, dental caries is currently the most prevalent civilization disease impacting most of the world’s population. Research shows that 60–90% of school children have caries; therefore, it represents a public health issue that requires serious consideration . Each individual’s oral health depends on their oral hygiene habits, diet, economic status, and frequency of dental health care visits . Caries prevention is based on measures and activities conducted in early childhood which, besides continuous fluoride application, high levels of oral hygiene, and adequate changes in diet and lifestyle, include systematic prevention and health education programs . Research on the prevalence of childhood caries conducted in Croatia has been rare and sporadic. However, a trend of decayed, missing and filled teeth for permanent teeth (DMFT) index decline can be observed; in 12-year-old children, it was measured at 7 in 1968 and at 3.5 in 1999 . Findings published in 2014 and 2015 showed a 4.8 DMFT index in fifth-grade students and 4.14 decayed, missing, and filled teeth for primary teeth (dmft) index and a 4.18 DMFT . A study published in 2019 reported a 3.0 DMFT . The lack of data and the decline of dental care quality in early childhood can be traced back to the events of the nineties, when, after the civil war and the creation of the independent Republic of Croatia, new laws and reforms took place. One such reform pertained to dental health care within the Healthcare Act. 
This reform, introduced in 1994, eliminated pediatric specialist practices from the national health system, which had previously conducted pediatric dentistry and prevention, including monitoring the dmft/DMFT indexes and other oral health indicators. Additionally, during this reform, the choice of the child’s dentist was handed over to the parents, when previously each kindergarten and school had an associated pediatric dentistry practice . Recent locally conducted studies have shown that oral health in Croatia is still neglected and unsatisfactory, and that the awareness of the importance of oral health and its influence on general health is insufficient . It is well-known that dental health needs to be addressed from the earliest age since children with healthy primary teeth have a very high likelihood of having healthy permanent dentition . Guidelines state that pediatric oral health promotion begins even before birth, i.e., through the education of pregnant women on diet, microbe transmission, and oral health—both their own and that of their children. As first teeth appear in the first year of life, it is extremely important that parents are educated on proper diet and the need to delay and minimize the consumption of sugary foods, on the ways of infection transmission and caries development, and on methods of oral cavity cleaning and maintenance. Additionally, in the first year of life, it is important to have the first visit to the dentist; the best time for establishing proper dietary and hygiene habits is between the ages of one and four. From the age of five to ten, primary dentition is being replaced with permanent teeth. During this time, proper oral hygiene should be continued, along with the maintenance of a healthy diet, fluoridation, and more frequent dentist visits, ideally every six months . Considering the guidelines above, it is clear that great strides can be made in maintaining oral health with minimal financial investments. 
Despite positive examples being available both locally and in neighboring countries, the oral health status of children in Croatia has been neglected for many years, leading to a significant public health issue. Even though pediatric dental health care had previously existed, it was excluded during a health reform in the nineties; this in turn led to parents only taking their children to dental checkups when pain appeared. In order to change this and draw attention to the lack of oral health care, a local group of enthusiasts created an initiative which aimed to highlight the issue and demonstrate that prevention activities can be effective and later be implemented on a national level. Experts from the Teaching Institute of Public Health of Primorje-Gorski Kotar County (PGC), the Clinic for Dental Medicine of the Clinical Hospital Center Rijeka, and the Department of Pediatric Dentistry of the Faculty of Dental Medicine of the University of Rijeka have implemented the “Advancement of oral health in PGC children and youth” Program, which has been conducted continuously since 2008 and has ensured the systematic oral health care of children and youth in the PGC. Similar programs have previously shown the usefulness of early detection of caries and oral health needs through the promotion of early prevention, as well as the importance of public health interventions . The aim of this study was to analyze the program data and to assess the efficiency of this type of model in improving children’s oral health by evaluating the dmft/DMFT indexes of the target population, i.e., the students enrolling into first and fifth grades in the PGC from 2008 to 2019. Additionally, we aimed to assess whether there is a need for a similar program on a national level. For the purposes of this research, we used the Program Dental Registry data collected by the Teaching Institute.
The data were compiled from individual forms created according to the DMF Klein-Palmer system, on which clearly visible tooth surface cavities were noted as caries, and initial changes in transparency without cavitations were noted as healthy teeth . To assess the response of first- and fifth-grade students to preventive dental exams and any concomitant dmft/DMFT index changes over the subsequent years, we analyzed the data of all the students enrolling into the first (average age: 6) and the fifth elementary school grades (average age: 12) from 2008 to 2019 whose oral health forms had been returned to the Teaching Institute, considering that these are the WHO recommended age groups for monitoring oral health . The oral health data are collected at the Teaching Institute, in a dental database which enables precise monitoring of each tooth and holds accurate information regarding the tooth’s health, potential issues, and previous treatments. Additionally, the database facilitates administrative tasks by retaining basic patient information (name, sex, age, school, and dentist). Within the program, students enrolling in the first elementary grade were handed an oral health form at school which had to then be filled out by their chosen dentist. The filled-out form was given to their school medicine doctor as part of the health documentation for school enrollment. Fifth-grade students had their teeth examined at school by pediatric dentists. The exams were conducted in classrooms with artificial lighting and using single-use mirrors. The same forms as those for first-grade students were used and educational workshops on oral health were conducted. All forms were then returned to the Teaching Institute for data curation and analysis. Prior to the exams, the school provided the children with written consent forms for their parents. 
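Under the Klein-Palmer convention described above, each child's dmft score is the count of decayed, missing, and filled primary teeth, and the population index is the mean of those per-child counts. A small illustrative sketch follows; the tooth records and status codes are invented for demonstration, and the registry's actual coding scheme may differ.

```python
# Illustrative dmft computation: each tooth is coded as 'd' (decayed),
# 'm' (missing due to caries), 'f' (filled), or 'h' (healthy).
# The records below are invented for demonstration.

def child_dmft(teeth):
    """Count of decayed, missing, and filled teeth for one child."""
    return sum(1 for status in teeth if status in ("d", "m", "f"))

def mean_dmft(children):
    """Population dmft index: mean of per-child counts."""
    return sum(child_dmft(t) for t in children) / len(children)

children = [
    ["d", "d", "f", "h", "h"],  # dmft = 3
    ["h", "h", "h", "h", "h"],  # dmft = 0
    ["m", "f", "f", "d", "h"],  # dmft = 4
]
print(round(mean_dmft(children), 2))  # → 2.33
```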
Each examined student received a notice for their parents outlining their oral status, the conducted workshops, and recommendations for oral health maintenance. During the school year, nursing graduates from the teams of School and Adolescent Medicine conducted one-hour school workshops for all present students on the importance of oral health maintenance and on adequate teeth brushing with demonstrations. Each student was also given a “For a healthy and pretty smile” brochure. Since the program’s inception, health visitors have educated pregnant and postpartum women on the importance of oral health and handed them “The health and smile of your children are in your hands” educational brochures. Since 2014, the program has been implemented in PGC kindergartens as well; more intensive preventive oral health workshops were included alongside other daily activities. In kindergartens with adequate conditions, supervised teeth brushing was organized for children aged 3 to 6. After the workshops, the parents were given educational “Care for children’s teeth” brochures and the children were given educational coloring books. Additionally, upon kindergarten enrollment, the parents received instructions regarding the need to choose their child’s dentist. We also analyzed all available program administrative data and reports, namely, the number of children assessed and educated by dentists within the program, the response rate, the number of educational workshops conducted in kindergartens, the number of children included in daily supervised teeth brushing, and the number of insured persons under the age of six. The response rate was calculated by dividing the number of children who returned their oral health forms by the number of all children enrolled into the first and fifth grades.
Statistical Analysis
The data were analyzed in MedCalc, version 19.1.7 (MedCalc Software Ltd., Ostend, Belgium). The age of the included participants is shown as median and range (min-max).
Categorical data are shown through absolute and relative frequencies. The differences between frequencies have been calculated with the Chi-square test, and as a post hoc analysis, the proportion t-test was used. The trend was calculated using the Chi-square test for trend. Variance analysis (ANOVA) was used to determine the difference between dmft/DMFT indexes by age and sex, whereas the Scheffé test was used as a post hoc analysis. The significance level was set at p < 0.05 for all analyses. The use of deidentified patient registry data for the purposes of this research was requested from the Teaching Institute. The research was approved by the Ethics Committee of the Teaching Institute of Public Health of the PGC (approval number: 07-700/147-23, 3 November 2023), and was conducted in accordance with the tenets of the 2008 Declaration of Helsinki and its 2013 amendment.
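As a rough, stdlib-only illustration of the one-way ANOVA used for the dmft/DMFT group comparisons (the study itself used MedCalc; the group values below are invented for demonstration, not study data):

```python
# Illustrative one-way ANOVA F statistic computed from scratch.
# The per-group dmft values are invented placeholders.

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over lists of observations."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((v - m) ** 2 for g, m in zip(groups, means) for v in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

dmft_early = [5, 4, 6, 4, 5]  # e.g., early program years
dmft_late = [3, 4, 3, 4, 4]   # e.g., later program years
print(round(one_way_anova_f([dmft_early, dmft_late]), 2))  # → 7.2
```

A large F relative to its F(df_between, df_within) reference distribution indicates that group means differ more than within-group noise would explain; post hoc tests such as Scheffé then localize which pairs differ.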
From 2008 to 2019, 53,667 children were enrolled into the first and fifth grades: 27,323 into the first grade, and 26,344 into the fifth. Of them, 44,422 were examined; 21,714 first-grade students, median age 6 (range: 5–9) and 22,708 fifth-grade students, median age 12 (range: 10–15). The response rate was 86.2% for first and 79.4% for fifth graders, and 82.8% for both combined. In the first grade, 10,567 boys and 10,046 girls (20,631 in total) and in the fifth, 10,885 boys and 10,112 girls (20,997 in total) were examined. In both grades combined, there were 21,452 boys (51.5%) and 20,176 (48.5%) girls; there was no statistically significant difference between the two ( p = 0.204). Due to incomplete sex data, the data presented above were used in analyses that considered sex. The program is currently being conducted in 90 kindergartens in the PGC area; 98% of kindergartens have adequate conditions and conduct teeth brushing once a day. During the observed period, 2336 workshops were conducted, which included 30,496 preschool children. From 2008 to 2019, 1240 program workshops were conducted in the first grades and 21,714 students were assessed by dentists, and 1015 workshops were conducted with 22,708 students assessed in the fifth grades; in the same time period, health visitors educated 26,559 women. shows the measures of central tendency for the dmft index of first graders by years and the significant dmft differences by research years. The trend test showed that there was a statistically significant difference between groups in a certain direction (χ² = 334.15, p < 0.001), i.e., that the dmft index diminished over time. Using the variance analysis, it was found that there was a significant difference in the dmft indexes (F[11, 21,702] = 9.75, p < 0.001) and by using the post hoc Scheffé test it was found that the dmft index was greater in the first program years than in the final years, as shown in .
The means, standard deviations, and ranges of the DMFT indexes of fifth-grade students are shown in . The differences between the dmft indexes of first-grade students and the DMFT indexes of the fifth-grade students were also calculated and are shown in . The arithmetic mean of the dmft of first graders was 3.86 (standard deviation [SD] = 3.99) and that of the fifth-grade DMFT was 1.36 (SD = 1.70). The t-test showed that the first-grade students had a significantly higher dmft than the fifth-grade DMFT ( p < 0.01). The oral health indicator for first-grade children, the dmft, declined from 4.66 to 3.73 and that for fifth-grade students, DMFT, from 2.50 to 1.00. With the two-way ANOVA, we found a statistically significant difference by sex, with girls having a lower dmft (F[3, 41,624] = 5502, p < 0.001) and a lower DMFT (F[3, 41,624] = 5805, p < 0.001) than boys. We found an increasing trend in the number of insured people using pediatric dental health care from birth until six years of age (χ² = 137.27; p < 0.001) , despite the continuous drop in birthrate . From the total number of kindergarten-age preschool children (3–6 years old) in the PGC (N = 5693 children) who were included in the program, we found that 50% brushed their teeth daily. According to the 2008–2019 program data, in the PGC first-grade students the dmft index decreased from 4.66 to 3.73, whereas the DMFT index in fifth-grade students decreased from 2.50 to 1.00. From 2011 to 2015, the dmft index in first graders was continuously within the 4.17–4.23 range. With the program expansion in 2014 and the commencement of continuous, comprehensive, and systematic preventive activities in preschool institutions, an improvement in oral health indicators can be seen in first graders—the dmft was 4.0 in 2016, 3.8 in 2017 and 2018, and 3.7 in 2019.
With this, we have shown that prevention programs that are intensively conducted in preschool establishments and are based on teeth brushing monitoring have a positive impact on the oral health of elementary school children, as seen in previous studies . Additionally, the obtained findings show a DMFT reduction, i.e., an improvement in oral health status in fifth graders as well; as such, we can compare ourselves to other countries with a low DMFT index, namely Italy (1.1), Spain (1.07), Sweden (1.1), and Denmark and Switzerland (0.9) . The dmft/DMFT decreases can be attributed to the main and additional activities of the program, namely, the preventive checkups, higher oral preventive awareness and more common prevention advice given by pediatricians and health visitors, earlier first dental visits, supervised teeth brushing in kindergartens, and preventive workshops. Our study had several limitations. Firstly, the program inclusion was voluntary; therefore, we could only assess the dmft/DMFT indexes of children whose parents returned the filled-out forms to the Institute. Secondly, as there were no comparable programs that have been conducted locally, we were unable to compare our findings to those of our neighboring counties or other countries. Thirdly, we were unable to measure the influence of supervised teeth brushing and the number of parents and children advised by pediatricians and health visitors. Aside from the program activities, it is possible that the dmft/DMFT index decrease was also influenced by various media and social networks; however, this was not assessed in this study. Such a decrease would also be expected if the availability of sugary foods was decreased; however, no national policies have been implemented regarding sugar contents in food nor have there been any restrictions regarding what is available to children. Additionally, there has been no new evidence regarding positive changes in children’s overall diets.
The findings of a 2022 meta-analysis showed that there have been few studies on oral hygiene education worldwide; of the 18 in total, 2 were conducted in Europe and only 3 according to the WHO-outlined concept, which gives additional significance to the quality of our program considering the findings we obtained . Aside from working with children, the program also motivates the parents to choose a dentist as soon as possible. Since Croatia does not have organized oral health care for children, this responsibility is left to the parents, who oftentimes do not choose a dentist until caries-related issues manifest (pain, abscesses, etc.). The program ensured that most children enrolling in the first grade have a chosen dentist, which was not the case previously. By choosing a dentist, the child is included in the dental health care system, thus changing the practice of visiting the dentist only in dire need and introducing the option of preventive exams. The program model of having dentists visit schools has proven beneficial, similar to several studies which have shown that children included in school dental care programs are more motivated to visit their dentist and are more regular in their checkups and dental health care procedures . The number of pediatric preventive dental exams has been increasing since the program’s inception, especially after the inclusion of kindergarten children in 2014. The prevention activities have also increased the parents’ awareness of the need to visit the dentist early, which can be seen in the increasing trend of insured people under the age of six. Health visitors were also included in the program, educating pregnant and postpartum women on the importance of oral health maintenance, as they are especially motivated to maintain their children’s health.
Because in the early program years the DMFT index gradually declined while the dmft remained the same, it was necessary to commence more intensive prevention activities aimed at preschool children. Therefore, the program was expanded in 2014 by conducting workshops more intensively with kindergarteners and involving pediatricians in 2019 to cover all preschool children. The aim was that pediatricians, by following international, European, and Croatian guidelines, would send the children on their first dental checkup between 6 and 12 months of age, to ensure proper hygiene and feeding from the earliest age . The program showed an improvement in children’s oral health through a reduction of the dmft and DMFT indexes. A further decrease at a similar rate is expected in the future, with a predicted variation during the COVID-19 pandemic period. Various stakeholders have been important in the conduct of this program, each acting directly or indirectly within their domain for a common goal. Through their mobilization and interconnection, additional financial strains on the system can be avoided . The sustainability of this model is demonstrated by the fact that it has been in place for over 15 years and has yielded measurable results; it has also proven easily modifiable. Dental health care has been made available through a proactive approach which encompasses children from before birth to age 18. By forming local clinics, organizing educational activities, systematically involving the parents, and monitoring and adjusting the program according to the response rate, free dental health care has been provided. The program has also achieved the aim of influencing national policies—it has been the model for the creation of two national oral health care programs: in 2017, the “Dental Passport” program for schools, and in 2018 the “National Standards for Supervised Toothbrushing in Kindergartens and Primary Schools” program.
The subsequent program development and research will focus on more detailed data monitoring as well as on data comparison with similar local and national programs. Considering the possibilities of caries prevention and the advantages of early caries lesion diagnostics, it is necessary to establish preventive activities in kindergartens and schools. Conducting these types of programs has great importance in encouraging children and parents to visit their dentists. Education on adequate oral hygiene, dietary habits, and oral disease prevention enables children to develop lifelong habits of maintaining their oral health. Here, by showing a reduction of the dmft/DMFT indexes on a local level, we have demonstrated that this type of program needs to be implemented at a national level and that there is an urgent need for a comprehensive national pediatric oral healthcare strategy. |
Deciphering single-cell gene expression variability and its role in drug response | 8155cdfa-ecba-446a-99f9-2a4bee68836b | 11578114 | Pharmacology[mh] | Precision medicine, a revolutionary approach that acknowledges individual variability in drug response, has gained considerable attention in recent years owing to its potential to improve the effectiveness of drug treatments. This approach recognizes the inherent diversity in individual responses to drug therapies, a variability deeply rooted in genetic differences . In the pursuit of understanding the distinctive variations in drug response among individuals, research efforts are directed toward pharmacogenes , genes within an individual’s genome that profoundly influence their response to medications. These genes encode proteins involved in drug action, toxicity, transport, or metabolism, all of which play a pivotal role in determining drug efficacy and safety . For instance, CYP2D6 is responsible for the metabolism of about 20% of commonly prescribed drugs across various medical fields, including psychiatry, pain management, and cardiology . Individuals can be poor, intermediate, extensive, or ultra-rapid metabolizers based on their CYP2D6 genotypes . Additionally, P-glycoprotein (ABCB1) plays a key role in drug transport, particularly in expelling anticancer drugs from cells, thereby contributing to multidrug resistance in cancer . Variants in the ABCB1 gene can lead to differences in drug absorption and bioavailability. For example, certain variants might reduce the effectiveness of drugs by increasing their efflux from cells, leading to lower intracellular concentrations . Genetic polymorphisms within pharmacogenes have been recognized as a significant contributor to the variability in drug response . The exploration of genetic variants extends beyond coding regions to encompass regulatory elements such as promoters, enhancers, and microRNA binding regions . 
Notably, there is a particular focus on the expression Quantitative Trait Loci (eQTLs) that influence the expression levels of pharmacogenes . Genetic variations in these pharmacogenes can give rise to differences in drug metabolism, absorption, distribution, and target interactions, which in turn can result in varying therapeutic outcomes and the potential for adverse drug reactions. It is not surprising that studying the expression variation of pharmacogenes directly, in addition to their related genetic variants, can still give us essential information for predicting drug responses. Simonovsky et al. developed the local coefficient of variation (LCV) as an analytical tool to probe the relationship between gene expression variability and drug efficacy, utilizing bulk RNA-seq data. Their findings reveal that drugs targeting genes with high across-individual variability in expression often exhibit reduced effectiveness within the broader population. This study underscores the importance of considering gene expression variability in medication design. Expanding upon these foundational insights, we extend our analysis to leverage single-cell RNA sequencing (scRNA-seq) data. Recent advancements in scRNA-seq technologies and their related analysis tools have opened new horizons in understanding gene expression variability at an unprecedented resolution . Our primary aim is to explore the variability in the expression of pharmacogenes, genes directly involved in drug response, at the cellular level. More precisely, we aim to dissect and decipher the LCV of these crucial pharmacogenes across a myriad of cell types within eight distinct human tissues. Our analysis has unveiled a plethora of interesting discoveries. First, we have uncovered high expression variation among pharmacogenes, not only between different individuals but also between different cells of the same individual. 
Such variation is usually consistently high for cells across different cell types of the same tissue or cells across different tissues of the same cell type. Additionally, we have investigated the correlation between the LCV of pharmacogenes and the efficacy of associated drugs. Our results align with previous findings, demonstrating a negative correlation between cross-individual expression variability of pharmacogenes and drug efficacy. Finally, we explore the potential of integrating cross-cell and cross-individual LCV data to predict drug efficacy, highlighting that the expression variability of pharmacogenes may be a pivotal contributor to the observed variability in drug response, even within a given tissue microenvironment. In essence, our research illuminates the complex interplay between gene expression heterogeneity and drug response, bringing us one step closer to the era of truly personalized medicine.
Pharmacogenes are generally more variable than non-pharmacogenes across cells
Pharmacogenes have been reported to exhibit higher cross-individual expression variability than other protein-coding genes . To test whether pharmacogenes’ expression variability is also high across different cells of the same individuals, we obtained the snRNA-seq data across 15 944 cells from eight tissues and sixteen donors . We utilized the local coefficient of variation (LCV) as a metric for assessing expression variability. To obtain the cross-cell LCV of each gene, we calculated the LCV for each cell type by averaging the LCV values from the sixteen donors. The overall tissue-level LCV for that gene was then obtained by averaging these cell-type-level LCVs. As shown in , except for skin tissues, pharmacogenes consistently demonstrate significantly higher cross-cell variability compared to non-pharmacogenes ( P values < 0.05, T-tests).
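The two-level aggregation described above — donor-level LCVs averaged within each cell type, then cell-type means averaged into a tissue-level value per gene — can be sketched as follows. The numbers are invented placeholders, and the LCV metric itself (defined locally relative to genes of similar expression, per Simonovsky et al.) is taken here as precomputed input.

```python
# Sketch of the cross-cell LCV aggregation for one gene in one tissue.
# Input: precomputed per-donor LCVs for each cell type (values invented).

def tissue_lcv(donor_lcvs):
    """donor_lcvs: {cell_type: [LCV per donor]} -> tissue-level LCV."""
    cell_type_means = [sum(vals) / len(vals) for vals in donor_lcvs.values()]
    return sum(cell_type_means) / len(cell_type_means)

gene_lcvs = {
    "basal":    [0.8, 1.0, 0.9],  # one value per donor
    "squamous": [1.2, 1.4, 1.0],
}
print(round(tissue_lcv(gene_lcvs), 2))  # → 1.05
```

Averaging within cell types first prevents cell types with many donors or cells from dominating the tissue-level summary.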
Our own analysis of the population-level GTEx data further corroborated that pharmacogenes typically exhibit increased expression variability across different individuals when compared to non-pharmacogenes (as shown in , P value < 0.001, T-tests). It is worth noting that the esophagus tissue in the bulk data corresponds to esophagus mucosa and esophagus muscularis in the snRNA-seq data. To distinguish different types of pharmacogenes and explore potential differences in their LCV patterns compared to non-pharmacogenes, we categorized pharmacogenes into three main functional groups: “regulation,” “transport,” and “metabolism.” In our analysis of cross-cell LCVs, we found significant differences between all three groups of pharmacogenes and non-pharmacogenes in six out of eight tissues studied, excluding breast and skin tissues. Specifically, the “regulation” group exhibited additional significant differences between pharmacogenes and non-pharmacogenes in breast tissue. Across all tissues, our analysis of cross-individual LCVs consistently showed significant differences between all three groups of pharmacogenes and non-pharmacogenes. Detailed plots illustrating these findings can be found in and . We conducted a more detailed investigation into the correlation between cross-cell and cross-individual LCVs for pharmacogenes for the eight different tissues. As shown in , all tissues exhibit a significant positive correlation between the two types of variability measurements. This suggests that there might be shared underlying mechanisms or factors contributing to the expression variability of pharmacogenes within these particular tissue contexts, both at the intra-individual and inter-individual levels. The positive correlation suggests that factors affecting variability within individual cells also contribute to variability across different individuals. 
Conversely, a lack of correlation would imply that high cross-individual variability is driven by population-specific factors like genetic diversity or environmental influences, which might not manifest uniformly within individual cells.
Variability of pharmacogenes at the cell-type level
Analyzing variability patterns of pharmacogenes across cells of the same cell type and tissue, our study found that pharmacogenes still demonstrate higher variability than non-pharmacogenes. As an illustration, in the esophagus mucosa tissue , six out of seven cell types exhibit a higher cross-cell LCV for pharmacogenes compared to non-pharmacogenes. This contrast in variability is particularly evident in three epithelial cell types, namely basal, squamous, and suprabasal cells, as well as in endothelial vascular cells (P value < 0.001, T-tests). Moreover, our exploration extends to other tissues ( and ). In six out of seven cell types of the heart and all eight cell types of the prostate, pharmacogenes exhibit significantly higher LCV values than non-pharmacogenes. The extended scope of our analysis further solidifies our findings. provides a comprehensive visual representation of our observations, presenting the T-test p-values that compare pharmacogene and non-pharmacogene expression variability across all cell types found in the eight distinct tissues. More than 75% (19 out of 25) of cell types show pronounced pharmacogene variability vs. non-pharmacogenes in at least one tissue. For each pharmacogene, we calculated the range in LCV across different cell types within identical tissues. As depicted in , pharmacogenes demonstrated a comparable or even narrower range (observed in skin and breast tissues) in their LCVs compared to non-pharmacogenes. Notably, the LCVs are similar between pharmacogenes and non-pharmacogenes in the skin tissue . For the breast tissue, pharmacogenes show elevated LCVs in contrast to non-pharmacogenes .
This heightened LCV is consistently observed across different cell types within the breast, resulting in a reduced overall range .

Consistent variability patterns of pharmacogenes in the same cell types across different tissues

We analyzed the LCV distribution for specific cell types across different tissues. Our selection criteria included only those cell types found in more than three tissue types in our dataset, including “Endothelial cell (vascular),” “Fibroblast,” and “Adipocyte.” For each pharmacogene, we computed the range of its LCV across different tissues of the same cell type. Our results yielded statistically significant differences in the distribution of LCV ranges between pharmacogenes and non-pharmacogenes ( , P-value < 0.05 for endothelial cells and adipocytes, T-tests). Notably, pharmacogenes displayed a lower range of LCV than non-pharmacogenes across various tissues. This pattern suggests a consistently high expression variation for pharmacogenes across varied tissue environments within specific cell types. Thus, compared to non-pharmacogenes, pharmacogenes tend to exhibit higher variation across different cells of the same cell type (see and B). Moreover, they consistently display such high variation across different cell types within the same tissue environment and across different tissues of the same cell type .

Distribution of pharmacogenes’ LCV values across different cell types and different tissues

illustrates how pharmacogenes distribute across tissue cell types based on their peak LCV (largest local coefficient of variation), with LCV values averaged across different individuals. Let N_ij denote the count of pharmacogenes that exhibit their maximal LCV within cell type i of tissue j. The average of N_ij across all possible combinations of i and j is 14.78.
Cell types within the skin (including epithelial cells, sebaceous cells, and unknown types) and the heart (including endothelial cells, immune cells, fibroblasts, and adipocytes) show N_ij values exceeding this average of 14.78. Conversely, all cell types in tissues such as the lung and prostate have N_ij values below this average. This pattern highlights the diversity in LCV distribution across various cell types and tissues.

The heatmaps in and C display the correlation between LCV values among different cell types and tissues (averaged across multiple individuals) using hierarchical clustering, for pharmacogenes and non-pharmacogenes, respectively. demonstrates that, while most cell types exhibit a low positive correlation with each other, a notably high correlation is observed between heart and prostate myocytes. Additionally, myocytes and epithelial cells exhibit a stronger correlation with each other compared to other cell types. These findings provide valuable insights into the relationships between different cell types and tissues regarding the variability of pharmacogenes. Conversely, reveals that the correlations for non-pharmacogenes are predominantly weak, showing no significant trends.

Drug efficacy is negatively correlated with the expression variability (LCV) of pharmacogenes

Previous research conducted by Simonovsky et al. highlighted a negative correlation between the variability of pharmacogenes across individuals (i.e. cross-individual variability) and drug efficacy using bulk RNA-seq data. In our study, we aimed to delve deeper into this correlation for cross-cell variability by focusing on cell-level LCVs obtained from single-nucleus RNA sequencing (snRNA-seq). To investigate the relationship between pharmacogenes’ cross-cell variability and drug efficacy, we computed the weighted average LCV of each drug’s target genes (details in Materials and methods). Specifically, we extracted each drug’s target genes from the DGIdb database .
For each pharmacogene, we calculated the LCV for each cell type of a tissue and for each donor individually, then averaged these donor-specific cell-type-level LCVs. The maximum averaged LCV across the different cell types of a particular tissue was chosen as that tissue's representative value for subsequent analyses. Next, we used the interaction score of each gene-drug pair as the weight to compute the weighted average LCV. Similarly, we calculated the cross-individual LCV of pharmacogenes based on GTEx tissue bulk RNA-seq data and obtained the weighted average for each gene-drug pair. Consequently, for every drug and every considered tissue, there exists a corresponding cross-cell pharmacogene LCV (C_k) and a cross-individual pharmacogene LCV (I_k). The analysis was conducted separately for each tissue.

Our findings reveal a consistent negative correlation between drug efficacy and the LCV of pharmacogenes. As demonstrated in , six out of eight tissues exhibit a negative correlation between drug efficacy and cross-cell pharmacogene LCVs. Meanwhile, a negative correlation between drug efficacy and cross-individual LCVs was observed in all seven considered tissues . In essence, drugs targeting genes with higher LCV values tend to exhibit lower efficacy. The statistical significance of this negative correlation was confirmed for cross-cell variability in the skeletal muscle and lung tissues (correlation = −0.149 and −0.256, P = 0.04 and 0.002, respectively; one-tailed Spearman’s tests) and for cross-individual variability in the esophagus and heart tissues (correlation = −0.182 and −0.287, P = 0.0005 and 0.02, respectively; one-tailed Spearman’s tests). These results underscore the significance of gene variability in understanding drug efficacy at both the individual and population levels across various tissues.
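The drug-level aggregation and correlation test described above can be sketched as follows. This is a dependency-free illustration with toy numbers: the helper names, the per-gene LCVs, the interaction scores, and the efficacy values are all hypothetical, and ties in the rank computation are not handled.

```python
def weighted_lcv(lcvs, scores):
    """Interaction-score-weighted average LCV over a drug's target genes."""
    return sum(w * x for w, x in zip(scores, lcvs)) / sum(scores)

def ranks(xs):
    """1-based ranks of the values in xs (no tie handling in this sketch)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman's rho via the rank-difference formula (untied data)."""
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical per-tissue inputs: one weighted target-gene LCV per drug,
# plus a relative efficacy score per drug.
drug_lcvs = [weighted_lcv([0.8, 1.2], [0.5, 1.0]),
             weighted_lcv([0.3, 0.4, 0.2], [1.0, 1.0, 0.5]),
             weighted_lcv([1.5], [1.0]),
             weighted_lcv([0.9], [1.0])]
efficacy = [0.62, 0.90, 0.41, 0.55]

rho = spearman(drug_lcvs, efficacy)  # negative: high-LCV targets, lower efficacy
```

In the actual analysis a one-tailed Spearman test supplies the P values quoted above; the sketch only computes the correlation coefficient.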
Enhanced drug efficacy prediction through joint consideration of cross-cell and cross-individual LCV

As drug efficacy is influenced by both cross-cell LCV and cross-individual LCV, we explored whether the combination of these two variables could enhance our predictive capabilities. To accomplish this, we first formulated multiple linear regression models utilizing various combinations of LCV features (see Materials and methods for details). We selected linear regression for its simplicity and the straightforward interpretability of its coefficients. Given our relatively small sample size of drugs, linear regression is less likely to overfit compared to more complex models.

When we exclusively utilized tissue-level cross-individual LCV features to predict drug efficacy (model 1), the resulting adjusted R-squared value was a mere 0.043. Alternatively, focusing solely on tissue-level cross-cell LCV features (model 2) yielded a somewhat improved adjusted R-squared of 0.074. However, it was when we jointly considered both cross-individual and cross-cell LCV features (model 3) that our model demonstrated substantial improvement, achieving an adjusted R-squared of 0.121. Notably, several predictors (P < 0.05) emerged as significant contributors to this enhanced prediction, encompassing cross-individual LCV features in the esophagus (P = 0.02) and heart (P = 0.007) tissues, along with the cross-cell LCV feature in the lung (P = 0.003). Furthermore, we explored an approach that integrates tissue cell-type-level LCVs (T_k, computed across cells belonging to the same cell type within a specific tissue) with cross-individual LCVs (model 4). Remarkably, this approach exhibited superior predictive power, resulting in an adjusted R-squared of 0.214.
Among the significant predictors (P < 0.05) were cross-cell LCV features for breast epithelial cells (luminal), prostate epithelial cells (Hillock), prostate fibroblasts, and lung epithelial cells (alveolar type II). None of the cross-individual LCV terms stood out.

To further investigate the relationship between drug efficacy and the LCVs in target tissues, we focused on two distinct sets of drugs targeting the heart (Amiodarone, Digoxin, Diltiazem, Disopyramide, Dofetilide, Dronedarone, Flecainide, Lidocaine, Propafenone, and Sotalol) and the lung (Aminophylline, Arformoterol, Montelukast, Pseudoephedrine, Salbutamol (Albuterol), Theophylline, Tiotropium, Zafirlukast, Zileuton, and Levofloxacin). For each drug set, we developed simple linear regression models using the tissue-level cross-cell LCV (C_k) and cross-individual LCV (I_k) calculated from each tissue. For both drug sets, the models trained on the LCVs of the target tissues consistently ranked among the top three performers. Specifically, for heart-targeting drugs, the three best models used features from the heart (adjusted R² = 0.379), breast (adjusted R² = 0.261), and skin (adjusted R² = 0.143). Similarly, for lung-targeting drugs, the three best models used features from the breast (adjusted R² = 0.229), heart (adjusted R² = 0.211), and lung (adjusted R² = 0.117).

To validate these findings, we repeated the same analysis with randomly selected drug sets of the same size as the tissue-specific sets. We repeated this process 1000 times and calculated the mean adjusted R². Notably, the random selections performed significantly worse, with a mean adjusted R² of 0.094 for models trained on heart features and 0.064 for models trained on lung features.

Linear regression assumes a linear relationship between LCV features and drug efficacy.
This assumption may oversimplify complex biological relationships, potentially leading to an incomplete representation of underlying patterns in the data. To address this, we subsequently employed a random forest machine learning model, utilizing the cell-type-level LCVs (T_k) and cross-individual LCVs (I_k) from model 4, to explore the potential nonlinear relationship between pharmacogene expression variability and drug efficacy. presents scatter plots that illustrate the relationships between drug relative efficacy and the top five features based on the highest node purity (i.e. how well a node separates samples of the same class from those of different classes). These include four cross-cell LCV features, for heart endothelial cells (vascular), esophagus mucosa fibroblasts, lung epithelial cells (alveolar type I), and lung epithelial cells (alveolar type II), and the cross-individual LCV for the heart. A LOWESS line was incorporated to more accurately capture and illustrate the underlying negative trends in the data. Complementing this, presents an incMSE (“increase in mean squared error,” a measure of the improvement in prediction accuracy achieved by a feature) plot, highlighting the relative importance of the top three features: all are cross-cell LCV features, for esophagus mucosa fibroblasts and lung epithelial cells (alveolar types I and II).

Individual responses to drug treatments are intricately tied to the variability in gene expression, especially within pharmacogenes, which play crucial roles in drug responses. Our study utilized single-cell RNA sequencing (scRNA-seq) data to delve into the expression variability of pharmacogenes across various cell types in eight human tissues. scRNA-seq allows for the capture of expression patterns at the individual cell level, enabling the identification of cell-type-specific gene expression that bulk RNA-seq averages out. This detailed resolution is crucial for understanding the heterogeneity within tissues and the specific roles of different cell types in biological processes.
Our findings not only confirm the well-established link between pharmacogene expression variability and drug efficacy but also offer insights into how the cellular-level variability can be leveraged for improved predictions.

The discrepancy in LCV between the GTEx bulk RNA-seq data and scRNA-seq data is notable, with the former showing greater variability, indicated by a larger interquartile range ( vs. ). This difference can be attributed to several factors. Firstly, the GTEx bulk dataset includes samples from a diverse population of approximately 1000 donors, reflecting a broad spectrum of genetic backgrounds that likely contribute to increased gene expression variability. Secondly, bulk RNA-seq captures gene expression across all cell types within a tissue, and tissue heterogeneity can introduce additional variability. In contrast, scRNA-seq provides cell-type-specific data, enabling independent calculation of LCV for each cell type. Aggregating these cell-type-specific LCVs yields a more precise measurement of overall tissue LCV.

We observed significant expression variability among pharmacogenes, both between different individuals and between different cells of the same individual. Pharmacogenes consistently exhibited higher variability compared to non-pharmacogenes, a trend that was evident across various tissues. This aligns with previous findings that pharmacogenes often display increased variability in expression, contributing to the observed diversity in drug responses among individuals. At the cross-individual level, genetic differences among patients can lead to varying expression levels of the same pharmacogene, resulting in different drug responses. For instance, patients with higher expression of certain key pharmacogenes may metabolize or react to drugs differently compared to those with lower expression levels.
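The heterogeneity point above, that pooling cells of different types inflates bulk-level variability, can be illustrated numerically. The values below are toy numbers, not real expression data: a gene with modest spread within each of two cell types shows a much larger coefficient of variation when the two types are mixed, as a bulk-like measurement would see them.

```python
import statistics as st

def cv(xs):
    """Coefficient of variation: population standard deviation / mean."""
    return st.pstdev(xs) / st.mean(xs)

# Toy expression values for one gene in two cell types of a tissue.
type_a = [9.0, 10.0, 11.0, 10.0]   # mean ~10, low spread
type_b = [19.0, 21.0, 20.0, 20.0]  # mean ~20, low spread

pooled = type_a + type_b           # what a bulk-like mixture would see

within = (cv(type_a) + cv(type_b)) / 2  # cell-type-resolved variability
mixed = cv(pooled)                      # heterogeneity inflates the CV
```

Here `mixed` exceeds `within` purely because the two cell types differ in mean expression, mirroring why the bulk-derived LCVs show a larger spread than the cell-type-resolved ones.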
At the cross-cell level, our findings indicate that even within a single individual, different cell types can show significant variability in pharmacogene expression. This suggests that a drug’s effectiveness could be influenced by the specific cellular composition of the targeted tissue. In patients with different disease states, changes in cellular composition could further contribute to variability in drug responses.

Additionally, we found that the variability in gene expression among different cell types is closely linked to their specialized functions. For instance, epithelial and endothelial cells exhibit high gene variability due to their pivotal roles in drug transport. Epithelial cells, lining organ surfaces, play critical roles in the absorption and excretion of various substances, including drugs . They must dynamically regulate gene expression to manage the influx, processing, and efflux of a wide array of compounds. Similarly, endothelial cells are central to the circulatory system’s transport functions, including nutrient and oxygen delivery, waste removal, and immune surveillance . Serving as the vital interface between the bloodstream and tissues, endothelial cells must adapt to diverse conditions and demands, requiring flexible gene expression patterns.

To further validate our findings, we performed randomized tests for all comparisons between pharmacogenes and non-pharmacogenes. In each test, we randomly selected an equal number of non-pharmacogenes to compare with pharmacogenes. We then repeated this process 10 000 times to evaluate the significance of the average p-value. Notably, the results were similar to those obtained without the randomized tests. Detailed plots illustrating these findings can be found in – .

Our analysis unveiled a negative correlation between the variability of pharmacogenes and drug efficacy, both at the cross-cell and cross-individual levels.
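The randomized comparison described above can be sketched as follows. This is a simplified, dependency-free version: it uses a comparison of means and an empirical p-value in place of the paper's repeated T-tests, and the LCV values are toy numbers, not real data.

```python
import random
import statistics as st

def randomized_comparison(pharma_lcv, non_pharma_lcv, n_iter=10_000, seed=0):
    """Empirical p-value for 'pharmacogene LCVs are higher': the fraction
    of equal-sized random draws of non-pharmacogenes whose mean LCV
    reaches the pharmacogene mean. (The paper applies T-tests; a mean
    comparison keeps this sketch dependency-free.)"""
    rng = random.Random(seed)
    target = st.mean(pharma_lcv)
    k = len(pharma_lcv)
    hits = sum(
        st.mean(rng.sample(non_pharma_lcv, k)) >= target
        for _ in range(n_iter)
    )
    return hits / n_iter

# Toy LCVs: pharmacogenes shifted upward relative to non-pharmacogenes.
pharma = [1.6, 1.8, 1.5, 1.7, 1.9]
non_pharma = [round(0.5 + 0.01 * i, 2) for i in range(100)]  # 0.5 .. 1.49

p_emp = randomized_comparison(pharma, non_pharma)  # small: pharma LCVs are higher
```

Because the draws are matched in size to the pharmacogene set, this style of test controls for the large imbalance between the two gene groups.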
Drugs targeting genes with higher expression variability tended to exhibit reduced efficacy, highlighting the importance of considering gene expression heterogeneity when designing and predicting drug responses. This correlation was observed across multiple tissues, emphasizing the broad impact of pharmacogene variability on drug outcomes.

To enhance our understanding and predictive capabilities, we developed regression and machine learning models that integrated cross-cell and cross-individual pharmacogene expression variability. These models showed promising results, particularly when combining both types of variability. Notably, the joint consideration of cross-cell and cross-individual LCV features yielded a substantial improvement in predicting drug efficacy. This suggests that a comprehensive approach, encompassing variability at both the cellular and individual levels, can provide valuable insights into drug performance. Our analysis identified the cross-cell LCV features, especially those in the lung, as dominant predictors.

We trained linear regression models on tissue-specific drug sets to demonstrate that LCVs in the target tissue are predictive of drug efficacy. Models trained on features from the target tissue consistently ranked among the top three performers, although they were not always the best models. This variability can be attributed to well-known off-target drug effects and the bias introduced by our small sample size of drugs.

Our findings suggest that incorporating single-cell gene expression variability into the early stages of drug development could enhance the design process. By identifying pharmacogenes with high variability across different cell types and tissues, researchers can pinpoint potential targets likely to produce variable patient responses. This approach could guide the development of drugs that account for such variability, leading to more consistent and effective treatments.
Additionally, considering both cross-individual and cross-cell variability could improve predictions of drug efficacy and safety, ultimately supporting the creation of more personalized, context-specific therapies. In summary, our research underscores the complexity of gene expression variability in pharmacogenes and its profound impact on drug efficacy. By elucidating these variability patterns at both cellular and tissue levels, we move closer to the era of personalized medicine. Understanding how individual genetic differences manifest in drug responses allows for more tailored and effective treatment strategies. While our study provides valuable insights, several limitations should be acknowledged. The reliance on single-cell RNA sequencing data, while offering high resolution, may not capture the entirety of gene expression variability in complex tissues. Additionally, the dataset’s focus on healthy tissues limits the extrapolation to disease contexts where drug responses may differ. Future studies could explore how these variability patterns translate into clinical settings and consider a broader range of tissues and disease conditions. The scRNA-seq data and GTEx bulk RNA-seq data originate from different sample populations, which may introduce bias in comparisons. Specifically, the scRNA-seq data include only 16 donors. The LCV calculated from cells of this relatively small number of individuals may not generalize well to larger populations. Future studies should include a larger and more diverse set of donors to enhance the generalizability and robustness of the findings. In addition to the limitations of scRNA-seq data mentioned earlier, further insights could be gained by more detailed examination of tissue-specific factors and exploring cross-cell LCVs within specific tissues relevant to drug targeting. Considering additional covariates, such as the genetic diversity of pharmacogenes, may offer a clearer understanding of drug efficacy. 
Furthermore, the drug efficacy score in our analysis is derived from adverse event reports in the FDA Adverse Event Reporting System. However, the interpretation and reporting of adverse events can vary significantly among patients and healthcare providers, introducing variability that may affect the accuracy of the relative efficacy quantification. Future studies could mitigate this issue by adopting a more comprehensive approach to calculating drug efficacy, such as integrating multiple data sources to enhance the robustness and reliability of the measurements. In conclusion, our study contributes to the growing body of evidence supporting the importance of gene expression variability, particularly in pharmacogenes, for understanding and predicting drug responses. By integrating cross-cell and cross-individual variability measurements, we provide a framework for more precise drug efficacy predictions. This work lays the foundation for further investigations into the complicated relationships between gene expression, cellular heterogeneity, and drug outcomes, ultimately advancing the field of precision medicine.

Gene expression data from normal human tissue samples

Single-cell RNA-seq, more precisely single-nucleus RNA-seq (snRNA-Seq) data here were obtained from the Genotype-Tissue Expression (GTEx) V9 release ( https://gtexportal.org/home/ ). snRNA-seq uses isolated nuclei instead of whole cells to profile gene expression. The data were collected from non-disease samples of sixteen donors and eight tissues (skeletal muscle, breast, esophagus mucosa, esophagus muscularis, heart, lung, prostate, and skin). A total of 15 944 cells were investigated. Raw read counts were normalized by GTEx using CP10k (copy per 10 k transcripts). We filtered genes expressed in less than 50 cells and removed cells with less than 1650 genes.
Because snRNA-Seq data contain a large number of zero values, we also removed genes with mean expression lower than the 10th quantile of the means. In addition, bulk RNA-seq data for seven of these tissues (esophagus in the bulk data corresponds to esophagus mucosa and esophagus muscularis in the snRNA-seq data) were downloaded from the GTEx Analysis V8 release. Raw read counts were normalized using TPM (transcripts per million) by GTEx. Similarly, samples and genes with low quality were filtered according to GTEx analysis procedures.

Expression variability calculation

The inflated zero expression values in snRNA-Seq data result in a biased measure of expression variability when applying the coefficient of variation (CV) directly. Therefore, we adopted the local coefficient of variation (LCV) algorithm to estimate the expression variability. This algorithm uses a ranking approach based on a sliding window, which has been validated as the least biased towards lowly expressed genes and the most robust to data incompleteness compared to other variability measures , including standard deviation (SD), mean absolute deviation (MAD), coefficient of variation (CV), dispersion measure (DM), and entropy variance (EV). Here, we used a 500-gene window. The LCV values range from 0 to 100. A larger LCV represents higher expression variability.

Selection of pharmacogenes

A list of 389 pharmacogenes, referred to as “PGRN pharmacogenes,” was obtained from Chhibber et al . . These genes were identified from various resources and publications related to drug responses, including PharmGKB , PharmaADME , and FDA Pharmacogenomics Biomarkers . We then compared the expression variability of pharmacogenes with that of the remaining non-pharmacogenes profiled in GTEx. To extend our list of pharmacogenes for the drug efficacy study, we incorporated 312 additional genes from the DGIdb database ( https://www.dgidb.org/ ).
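The rank-based sliding-window idea behind the LCV can be sketched as below. This is a simplified toy version (a 5-gene window on 10 genes rather than the 500-gene window used in the study, and without the published algorithm's tie handling), intended only to show how a gene's CV is ranked against expression-matched neighbors and scaled to 0–100.

```python
def local_cv(means, cvs, window=5):
    """Sketch of the rank-based local coefficient of variation (LCV):
    each gene's CV is ranked only against genes with a similar mean
    expression (a sliding window over the mean-ranked gene list), and
    the percentile rank is scaled to the 0-100 LCV range."""
    order = sorted(range(len(means)), key=lambda i: means[i])
    lcv = [0.0] * len(means)
    half = window // 2
    for pos, g in enumerate(order):
        lo = max(0, pos - half)
        hi = min(len(order), pos + half + 1)
        neighbors = [cvs[order[j]] for j in range(lo, hi)]
        rank = sum(c <= cvs[g] for c in neighbors)  # rank within the window
        lcv[g] = 100.0 * rank / len(neighbors)
    return lcv

# toy data: 10 genes ordered by mean expression; gene 0 has the largest
# CV among its expression-matched neighbors, so its LCV should be 100
means = [float(i) for i in range(10)]
cvs = [0.9, 0.1, 0.2, 0.3, 0.2, 0.4, 0.1, 0.3, 0.2, 0.5]
scores = local_cv(means, cvs, window=5)
```

Ranking within a local window is what makes the measure robust to the strong mean-variance relationship of expression data: a lowly expressed gene is only compared to other lowly expressed genes.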
Such additional selection focused on genes interacting with more than two drugs.

Drug-gene interaction score and drug relative efficacy

Drug-gene interactions were downloaded from DGIdb . This database presents an interaction score between a drug d and a target gene g as:

$${IS}_{d,g}=\frac{\text{total number of genes}}{\text{number of genes interacting with drug } d}\times\frac{\text{total number of drugs}}{\text{number of drugs interacting with gene } g}$$

This drug-gene interaction score, treated at the logarithmic scale, serves as the weight for computing the overall LCV across all n target genes of the same drug d:

$${LCV}_d=\frac{\sum_{g=1}^{n}\log({IS}_{d,g})\,{LCV}_g}{\sum_{g=1}^{n}\log({IS}_{d,g})}$$

Note that ${LCV}_g$ can be cross-cell or cross-individual LCV values for gene g . Furthermore, the relative efficacy (RE) scores for drug-disease pairs were obtained from Guney et al. . The RE scores were computed using text-mining methods on reports submitted to the FDA’s Adverse Event Reporting System (FAERS, https://open.fda.gov/data/faers/ ) and comparing the number of ineffective reports with the number of reports stating the most common complaints. RE has a range from 0 to 1, and a higher RE score indicates that a drug is more effective in treating the disease. A total of 129 drugs were considered in our study.

Computational models to predict drug relative efficacy

To predict drug relative efficacy (RE), we devised multiple regression models leveraging different combinations of LCV values for their corresponding pharmacogenes.

Cross-individual LCV Model: This model relies exclusively on tissue-level cross-individual LCV features: $RE=\beta_0+\sum_{k=1}^{7}\beta_k I_k$. Here, the combined LCV value of a drug for each tissue k ( $I_k$ ) was calculated as the weighted average of the LCV values of all pharmacogenes associated with that drug according to the formula above. For each pharmacogene, the LCV was calculated across multiple individual samples of tissue k .
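The log-weighted averaging of per-gene LCVs into a drug-level LCV, as described in the text, can be sketched as below. The function name and toy numbers are illustrative only; the scores are chosen so the log-weights are exactly 1, 2, and 3.

```python
import math

def drug_lcv(gene_lcvs, interaction_scores):
    """Combine per-gene LCVs into a drug-level LCV, weighting each target
    gene by the logarithm of its drug-gene interaction score. Scores must
    exceed 1 so the log-weights stay positive."""
    weights = [math.log(s) for s in interaction_scores]
    return sum(w * v for w, v in zip(weights, gene_lcvs)) / sum(weights)

# toy drug with three target genes
lcvs = [40.0, 60.0, 80.0]
scores = [math.e, math.e ** 2, math.e ** 3]   # log-weights 1, 2, 3
lcv_d = drug_lcv(lcvs, scores)                # (1*40 + 2*60 + 3*80) / 6
```

Taking logs of the interaction scores dampens the influence of a few very highly scored gene partners, so one dominant target cannot dictate the drug's combined variability.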
Cross-cell LCV Model: This model exclusively employs the tissue-level cross-cell LCV features: $RE=\beta_0+\sum_{k=1}^{8}\beta_k C_k$. For each pharmacogene, the LCV was calculated across cells of the same cell type within tissue k and then averaged across individuals. To further obtain a tissue-level measurement, we employed three different methods to aggregate LCV values across different cell types within that tissue: maximum, mean, and median. The corresponding adjusted $R^2$ values obtained from these methods were 0.074, 0.050, and 0.051, respectively. As a result, we chose the maximum LCV among different cell types within tissue k ( $C_k$ ) for the drug efficacy prediction.

Joint LCV Model: This model jointly considers both the cross-individual and cross-cell LCV features: $RE=\beta_0+\sum_{k=1}^{7}\beta_k I_k+\sum_{j=1}^{8}\beta_{7+j} C_j$.

Comprehensive joint LCV Model: This model integrates tissue-level cross-individual LCVs with cell-type-level LCVs: $RE=\beta_0+\sum_{k=1}^{7}\beta_k I_k+\sum_{j=1}^{37}\beta_{7+j} T_j$. In this case, the LCV for each pharmacogene was calculated across cells of the same cell type within a tissue (a total of 37 cell-type and tissue combinations) and then averaged across individuals. The weighted average across all pharmacogenes for a drug is denoted as $T_j$ .

The above four regression models provide a comprehensive framework for predicting drug relative efficacy by considering various combinations of LCV features, encompassing both individual and cell-level variability. Moreover, to capture the potential non-linear relationship between expression variability and drug efficacy, we applied a random forest model using the cell-type-level LCVs ( $T_j$ ’s) and cross-individual LCVs ( $I_k$ ’s) identified in Model 4. We ranked the impact of various LCV features on drug efficacy based on node purity and “increase in Mean Squared Error” (incMSE).
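Linear models of this kind reduce to ordinary least squares with an intercept; the sketch below fits such a model on synthetic data and reports adjusted R², the metric used to compare the aggregation choices. The feature counts and effect sizes are invented stand-ins, not the study's fitted coefficients.

```python
import numpy as np

def fit_linear(X, y):
    """Ordinary least squares with an intercept, returning coefficients
    and adjusted R^2."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1.0 - float(resid @ resid) / float(((y - y.mean()) ** 2).sum())
    n, p = A.shape
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p)   # penalize extra features
    return beta, adj_r2

# synthetic set of 129 drugs whose RE declines with two LCV features,
# standing in for one cross-individual (I_k) and one cross-cell (C_k) term
rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(129, 2))
y = 0.8 - 0.004 * X[:, 0] - 0.002 * X[:, 1] + rng.normal(0, 0.02, size=129)
beta, adj_r2 = fit_linear(X, y)
```

Adjusted R² is the appropriate comparison metric here because the four models have different numbers of features (7, 8, 15, and 44 terms), and plain R² would mechanically favor the larger ones.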
Node purity in random forest models refers to the homogeneity of the samples within each node of the decision trees comprising the forest. It measures how well a node separates samples of the same class from those of different classes. Higher purity indicates that the majority of samples within a node belong to the same class, resulting in clearer decision boundaries. “Increase in mean squared error” is a criterion used by random forest models to evaluate the effectiveness of splitting a node. It quantifies the reduction in overall variance that occurs when a node is split based on a particular feature. A larger increase in mean squared error suggests that splitting the node based on that feature results in greater improvement in prediction accuracy.
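An incMSE-style importance can be illustrated by permutation: shuffle one feature at a time and measure how much the model's error grows. The sketch below applies this idea to a toy linear predictor rather than the authors' fitted random forest, so it is a conceptual stand-in only.

```python
import numpy as np

def perm_importance(predict, X, y, n_repeats=20, seed=0):
    """Permutation importance in the spirit of incMSE: shuffle one feature
    at a time and record how much the model's mean squared error grows."""
    rng = np.random.default_rng(seed)
    base_mse = np.mean((predict(X) - y) ** 2)
    inc = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        deltas = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])          # break the feature-target link
            deltas.append(np.mean((predict(Xp) - y) ** 2) - base_mse)
        inc[j] = np.mean(deltas)
    return inc

# toy "model": efficacy depends strongly on feature 0, weakly on feature 1,
# and not at all on feature 2
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = 2.0 * X[:, 0] + 0.3 * X[:, 1]
predict = lambda M: 2.0 * M[:, 0] + 0.3 * M[:, 1]
importance = perm_importance(predict, X, y)
```

Features the model truly depends on incur a large error increase when shuffled, while irrelevant features incur none, which is what lets the method rank LCV features by their contribution to predicting efficacy.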
Magnitude and Determinant Factors of Herbal Medicine Utilization Among Mothers Attending Their Antenatal Care at Public Health Institutions in Debre Berhan Town, Ethiopia | d42239a7-0378-41e9-ad58-4cd4ac025a22 | 9098925 | Pharmacology[mh] | Traditional medicine is defined as the ways of protecting and restoring health that existed before the arrival of modern medicine . It is underestimated part of healthcare that finds in almost every country in the world . Traditional medicine has been being used in the maintenance of health and the prevention, diagnosis, improvement, or treatment of physical and mental illness . According to the WHO, herbal medicine is defined as the practice of herbs, herbal materials, herbal preparations, and finished herbal products , and they are derived from plant parts such as leaves, stems, flowers, roots, and seeds . Globally in the previous decade, there has been revived need and interest in the use of traditional medicine . The WHO estimated that 80% of the global population used traditional and complementary medicine as primary healthcare . The utilization of traditional medicine has maintained its global popularity and it varies from country to country . In Asian countries, the consumption of traditional medicine ranges from 40% in China to 65% in India . Similarly in European countries utilization of traditional medicine accounts for 31% in Belgium, 49% in France, and 70% in Canada . Approximately 80% of the population in Africa used traditional medicine , and evidence indicated that in sub-Saharan African countries (SSA) the prevalence of traditional medicine utilization among pregnant mothers was between 25 and 65% . Even though there is insufficient data on the safety of herbal medicine utilization during pregnancy , local herbal products were being recommended by healthcare professionals in sub-Saharan African countries (SSA) for different health-related problems during pregnancy . 
Herbal medicine toxicity can be related to a lack of proper standardization, absence of quality control, and adulteration of herbal products with other pharmaceutical drugs and potentially toxic substances. Hence, utilization of unstudied herbal medicines with unknown pharmacologic activity can result in adverse health outcomes for vulnerable groups such as older adults, children, and pregnant women and their fetuses . According to the literature, overutilization of herbal medicine during pregnancy is associated with various adverse maternal and child health outcomes such as preterm birth, cesarean birth, low birth weight, vaginal bleeding during pregnancy, maternal and neonatal morbidity and mortality, different congenital anomalies such as cleft lip, hypoplastic left heart syndrome, inguinal hernia, hydronephrosis, duplicate renal pelvis, fetal ductus arteriosus constriction, and trisomy 18, and different forms of maternal gastrointestinal complaints . Globally, women are the primary utilizers of herbal medicine (HM), and they consume different herbal medicines even during pregnancy . The consumption of herbal medicine among pregnant and childbearing mothers ranges from 7 to 55% , and this difference depends on the consumer's geographic location, ethnicity, culture, traditions, and social status . Accordingly, utilization of herbal medicine among pregnant mothers was 34% in Australia , 50% in the European Union , and 6–9% in the USA and Canada, respectively . Herbal products are believed to be a safe and natural alternative to conventional drugs among pregnant mothers and are used for the treatment of non-life-threatening conditions such as nausea and constipation . Globally, herbal medicine is available over the counter, which makes it very accessible despite the health consequences when self-prescribed by pregnant women .
Many studies have revealed that pregnant women use different types of herbal medicine, and the most commonly used herbal medicines were ginger ( Zingiber officinale Roscoe), chamomile ( Matricaria chamomilla L.), peppermint ( Mentha piperita L.), Echinacea ( Echinacea purpurea L.), cranberry ( Vaccinium oxycoccus L. and Vaccinium macrocarpum L.), garlic ( Allium sativum L.), raspberry ( Rubus idaeus L.), valerian ( Valeriana officinalis L.), fenugreek ( Trigonella foenum - graecum L.), fennel ( Foeniculum vulgare Mill.), and herbal blends and teas, namely green and black teas ( Camellia sinensis (L.) Kuntze). Pregnant mothers use herbal medicine for maternal or child-health-related problems, and the most commonly reported indications for utilization of herbal medicines were nausea, vomiting, urinary tract infections (UTIs), preparation or facilitation of labor, cold, gastrointestinal problems, improvement of fetal outcomes and prevention of miscarriage, anxiety, health maintenance, and edema . Moreover, pregnant mothers consume herbal medicines due to their easy accessibility, assumed better efficacy compared to modern medicine, traditional/cultural belief, and low cost compared to conventional medicine . Some evidence from Australia and Kenya showed that older and married pregnant mothers with low economic status and low educational level, and those who had nausea and vomiting, were the main utilizers of herbal medicine . Other literature has also found that herbal medicine use during pregnancy was determined by factors such as higher maternal age, lower educational level of the spouse, poor pregnancy outcomes, previous herbal medicine utilization, large family size, self-employment, unemployment, and rural residence, in addition to the previously mentioned factors . Nearly 80% of the Ethiopian population uses traditional medicine . The consumption of herbal medicines in Ethiopia is not only common but also culturally accepted and acknowledged .
Evidence indicates that the practice of herbal medicine in Ethiopia ranges from 40.6% in Harar to 73.6% in Hosanna . The cultural acceptability of healers and local pharmacopeia, the relatively low cost of traditional medicine, and difficult access to modern health facilities were some of the reasons for herbal medicine utilization in Ethiopia . The majority of pregnant mothers are unaware of the possible maternal and fetal complications of herbal medicine utilization , and pregnant and breastfeeding women are vulnerable to the harmful effects of herbal medicine consumption since appropriate dosages and safety profiles are not well established . The prevalence and determinants of herbal medicine utilization among pregnant mothers are a current public health concern in many developing countries, including Ethiopia. In addition, although some studies have been conducted in Ethiopia, there is a scarcity of data on the magnitude and determinants of herbal medicine utilization among pregnant women. Therefore, this study aimed to assess the magnitude and determinant factors of herbal medicine utilization among mothers attending their antenatal care visit at public health institutions in Debre Berhan town, Ethiopia.
Study Design and Study Period

An institutional-based cross-sectional study was conducted from 12 February 2021 to 12 April 2021.

Study Setting and Participants

The study was conducted in Debre Berhan town, which is one of the 13 zones of the Amhara regional state. Debre Berhan town is located 130 km to the north of Addis Ababa city. It is found at an altitude of 2,850 m above sea level with a temperature ranging from 13 to 28°C. The town has nine kebeles (seven kebeles have an urban population, while two have both urban and rural populations). Regarding health institutions, the town has one comprehensive referral hospital, two private hospitals, three public health centers, nine health posts, and 18 private clinics. Pregnant women who came to attend antenatal care at public health institutions in Debre Berhan town during the study period were our study population.

Inclusion and Exclusion Criteria

Pregnant women who came for antenatal care visits at public health institutions in Debre Berhan town during the data collection period were included, while pregnant mothers who were seriously sick, who could not come to public health institutions, or who were unable to respond during the data collection time were excluded from the study.

Sample Size Determination, Sampling Technique, and Procedure

The sample size was determined using the single population proportion formula based on the assumption of a 95% CI, a 5% margin of error, and a 48.6% prevalence of herbal medicine utilization :

n = (Z_{α/2})^2 × P × (1 − P) / d^2

where n = the required sample size, Z_{α/2} = the standard normal value at the 95% CI (1.96), P = the proportion of herbal medicine utilization (0.486), and d = the margin of error that can be tolerated (0.05). Thus:

n = (1.96)^2 × 0.486 × (1 − 0.486) / (0.05)^2 ≈ 383.

By considering a 10% non-response rate (n = 39), the final sample size became N = 422 pregnant mothers.
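The sample-size arithmetic above can be verified with a few lines of Python. Note that the study reports 383 (the fractional result truncated rather than rounded up) before adding the 10% non-response buffer; the sketch follows the paper's reported numbers.

```python
import math

def sample_size(p, d, z=1.96, nonresponse=0.10):
    """Single population proportion sample size with a non-response
    buffer, following the calculation reported in the text."""
    n = (z ** 2) * p * (1 - p) / (d ** 2)   # ≈ 383.86 for these inputs
    base = int(n)                           # the study reports 383
    buffer = math.ceil(base * nonresponse)  # 10% non-response → 39
    return base, buffer, base + buffer

base, buffer, final = sample_size(p=0.486, d=0.05)
```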
There are a total of four public health institutions in Debre Berhan town that provide focused antenatal care, and we included all four. The number of pregnant mothers surveyed from each health institution was allocated proportionally to the expected number of pregnant mothers visiting that institution during the study period, estimated from the number of pregnant mothers who visited each health institution in the last 2 months. The proportional allocation was calculated using the following formula:

n_j = (n / N) × N_j

where n_j = the sample size of the j-th health institution, n = the total sample size, N_j = the number of pregnant mothers who visited the j-th health institution in the last 2 months, and N = the total number of pregnant mothers who visited all public health institutions in the last 2 months. Lastly, study participants were selected systematically (k = 2235/422 ≈ 5) based on the order of pregnant mothers who came to antenatal care rooms at the health institutions until the required sample size was obtained.

Operational Definitions of Terms

➢ Herbal medicine use: refers to using the seeds, berries, roots, leaves, bark, or flowers of a plant for medicinal purposes.

➢ Herbal medicine utilization in current pregnancy: respondents were labeled as herbal medicine users if they had taken herbal medicine via any route of administration during the current pregnancy. Routine meal preparations and nutrients such as food additives were excluded.

➢ Knowledge was measured using four items prepared to assess it. Study participants were asked the knowledge-related questions; a value of one was given for correct answers and a value of zero for incorrect answers. The respondent's score was then dichotomized as sufficient or insufficient knowledge after the total score was computed by summing all the items.
➢ Sufficient knowledge: Study participants who scored equal to or greater than the mean value of the knowledge-related questions.

➢ Insufficient knowledge: Study participants who scored less than the mean value of the knowledge-related questions.

Data Collection Tool, Procedure, and Quality Control

Data were collected through face-to-face interviews using a semi-structured questionnaire. Five pharmacy degree holders and two adult health nursing master's holders were recruited as data collectors and supervisors, respectively. The data collection tool was developed from different published literature, and slight modifications were made to the questions to align them with the objective of our study . The questionnaire was designed in English, translated into Amharic, and back-translated into English for consistency of the collected data. Twenty-six items were included in the final questionnaire, divided into three sections. The first section covered sociodemographic and pregnancy-related information such as age, marital status, ethnicity, educational status of the mother, employment status, religion, monthly income, parity, presence or absence of ANC visiting history, presence or absence of health problems not related to gestation, trimester of pregnancy, and distance from the health facility. The second section aimed at assessing the knowledge level of herbal medicine among pregnant mothers; it was assessed by a series of questions such as whether they had heard about herbal medicine, the types of herbal medicine they knew, information about the complications of herbal medicine utilization, and the types of complications of herbal medicine utilization they knew.
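The knowledge dichotomization described in the operational definitions can be sketched as follows. The five respondents here are hypothetical; the real cut-point is the mean total score of the actual sample.

```python
def classify_knowledge(item_scores):
    """Sum each respondent's four 0/1 knowledge items and label them
    'sufficient' if the total is at or above the sample mean, else
    'insufficient' (the dichotomization described in the text)."""
    totals = [sum(items) for items in item_scores]
    mean = sum(totals) / len(totals)
    return ["sufficient" if t >= mean else "insufficient" for t in totals]

# hypothetical responses from five mothers (1 = correct, 0 = incorrect)
responses = [
    [1, 1, 1, 0],   # total 3
    [1, 0, 0, 0],   # total 1
    [1, 1, 1, 1],   # total 4
    [0, 0, 1, 0],   # total 1
    [1, 1, 0, 0],   # total 2
]
labels = classify_knowledge(responses)
```

With these toy responses the sample mean is 2.2, so totals of 3 and 4 are labeled sufficient and totals of 1 and 2 insufficient.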
The third section collected data concerning the level of herbal medicine utilization among pregnant mothers, sources of information regarding herbal medicine, presence or absence of discussion with their healthcare providers about herbal medicine utilization, and the satisfaction level of pregnant mothers toward utilization of herbal medicine. The utilization of herbal medicine among pregnant mothers was assessed by different questions such as the utilization of herbal medicines during pregnancy, reasons for use among herbal medicine utilizers, types of herbal medicine used, the purpose of herbal medicine utilization, trimester of herbal medicine utilization, sources of information about herbal medicine use, and any untoward effects faced during utilization of herbal medicines. To maintain data quality, data collectors were given 2 days of training on the overall research objective, including data collection procedures, tools, and how to fill in the data. In addition, the questionnaire was pretested on 10% of the sample size at Ataye hospital 3 weeks before the actual data collection period, and necessary amendments, such as language clarity and appropriateness of the tools, were made based on the findings of the pretest before the actual data collection. Collected data were reviewed and checked daily for completeness and consistency by the supervisors and the principal investigator.

Methods of Data Entry and Analysis

The collected data were cleaned, coded, and entered into Epidata version 3.1 and exported to the Statistical Package for Social Sciences (SPSS) version 25 for analysis. Bivariable logistic regression was used to identify candidate determinant factors of herbal medicine utilization among pregnant mothers.
Variables with a significant association in the bivariable analysis were entered into a multivariable binary logistic regression analysis to assess the determinant factors of herbal medicine utilization among pregnant mothers and P- values <0.2 and 0.05 were considered statistically significant for bivariable and multivariable binary logistic regression, respectively. The overall results were presented in texts, tables, and figures.
An institution-based cross-sectional study was conducted from 12 February 2021 to 12 April 2021.
The study was conducted in Debre Berhan town, which is located in one of the 13 zones of the Amhara regional state. Debre Berhan town lies 130 km to the north of Addis Ababa city, at an altitude of 2,850 m above sea level, with a temperature ranging from 13 to 28°C. The town has nine kebeles (seven have an urban population, while two have both urban and rural populations). Regarding health institutions, the town has one comprehensive referral hospital, two private hospitals, three public health centers, nine health posts, and 18 private clinics. Pregnant women who came to attend antenatal care at public health institutions in Debre Berhan town during the study period were our study population.
Pregnant women who came for antenatal care visits at public health institutions in Debre Berhan town during the data collection period were included, while pregnant mothers who were seriously sick, could not come to the public health institutions, or were unable to respond during the data collection period were excluded from the study.
The sample size was determined using the single population proportion formula, assuming a 95% CI, a 5% margin of error, and a 48.6% prevalence of herbal medicine utilization:

n = (Zα/2)² × P × (1 − P) / d²

where n = the required sample size; Z = the standard normal deviate at the 95% CI = 1.96; P = the proportion of herbal medicine utilization; and d = the tolerated margin of error, 5% (0.05). Thus,

n = (1.96)² × 0.486 × (1 − 0.486) / (0.05)² ≈ 383.

By considering a 10% non-response rate (n = 39), the final sample size became N = 422 pregnant mothers. There are a total of four public health institutions in Debre Berhan town that provide focused antenatal care, and all four were included. The number of pregnant mothers surveyed at each health institution was allocated proportionally, based on the expected number of pregnant mothers visiting each institution during the study period, estimated from the number of pregnant mothers who had visited each health institution over the preceding 2 months. The proportional allocation was calculated using the following formula:

nj = (n / N) × Nj

where nj = the sample size of the j-th health institution; n = the total sample size; Nj = the number of pregnant mothers who visited the j-th health institution in the last 2 months; and N = the total number of pregnant mothers who visited all public health institutions in the last 2 months. Lastly, study participants were selected systematically, based on the order in which pregnant mothers arrived at the antenatal care rooms of the health institutions, until the required sample size was obtained (k = 2,235/422 ≈ 5; every 5th mother).
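As a sanity check, the sample-size arithmetic and the proportional allocation can be reproduced in a few lines of Python; the per-facility visit counts in the usage note are hypothetical (only their total, 2,235, appears in the text).

```python
import math

Z = 1.96   # standard normal deviate for a 95% CI
P = 0.486  # assumed prevalence of herbal medicine utilization
D = 0.05   # tolerated margin of error

base = int(Z**2 * P * (1 - P) / D**2)   # single population proportion formula -> 383
final = base + math.ceil(0.10 * base)   # add a 10% non-response allowance (39) -> 422

def allocate(visits_per_site: dict, n: int) -> dict:
    """Proportional allocation n_j = (n / N) * N_j across health institutions."""
    N = sum(visits_per_site.values())
    return {site: round(n / N * Nj) for site, Nj in visits_per_site.items()}

k = 2235 // final  # systematic sampling interval -> every 5th pregnant mother
```

For two hypothetical facilities with 1,000 and 1,235 visits over the last 2 months, `allocate({'A': 1000, 'B': 1235}, final)` yields 189 and 233 participants, which sum to 422.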
➢ Herbal medicine use: refers to using the seeds, berries, roots, leaves, bark, or flowers of a plant for medicinal purposes.
➢ Herbal medicine utilization in current pregnancy: respondents were labeled as herbal medicine users if they had taken herbal medicine via any route of administration during the current pregnancy. Routine meal preparations and nutrients such as food additives were excluded.
➢ Knowledge: measured using four items. Study participants were asked the knowledge-related questions, and a value of one was given for each correct answer and zero for each incorrect answer. After the item scores were summed, each respondent's total score was dichotomized as sufficient or insufficient knowledge.
➢ Sufficient knowledge: study participants who scored equal to or greater than the mean value on the knowledge-related questions.
➢ Insufficient knowledge: study participants who scored less than the mean value on the knowledge-related questions.
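The knowledge-scoring rule can be sketched as a small function; the function name and the toy response matrix are illustrative, not from the study instrument.

```python
def knowledge_levels(responses):
    """Score four knowledge items (1 = correct, 0 = incorrect), sum per
    respondent, and dichotomize at the sample mean of the total scores."""
    totals = [sum(r) for r in responses]
    mean = sum(totals) / len(totals)
    return ["sufficient" if t >= mean else "insufficient" for t in totals]
```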
Data were collected through face-to-face interviews administered using a semi-structured questionnaire. Five pharmacy degree holders and two adult health nursing master's degree holders were recruited as data collectors and supervisors, respectively. The data collection tool was developed from different published literature, and slight modifications were made to the questions to align them with the objective of our study. The questionnaire was designed in English, translated into Amharic, and back-translated into English to ensure the consistency of the collected data. Twenty-six items were included in the final questionnaire, divided into three sections. The first section covered sociodemographic and pregnancy-related information such as age, marital status, ethnicity, educational status of the mother, employment status, religion, monthly income, parity, presence or absence of an ANC visiting history, presence or absence of health problems not related to gestation, trimester of pregnancy, and distance from the health facility. The second section assessed the knowledge level of herbal medicine among pregnant mothers through a series of questions, such as whether they had heard about herbal medicine, the types of herbal medicine they knew, information about the complications of herbal medicine utilization, and the types of complications of herbal medicine utilization they knew. The third section collected data on the level of herbal medicine utilization among pregnant mothers, the source of information regarding herbal medicine, the presence or absence of discussion with their healthcare providers about herbal medicine utilization, and the satisfaction level of pregnant mothers with the utilization of herbal medicine.
The utilization of herbal medicine among pregnant mothers was assessed with questions covering the use of herbal medicines during pregnancy, the reasons for use among herbal medicine users, the types of herbal medicine used, the purpose of herbal medicine utilization, the trimester of utilization, the source of information about herbal medicine use, and any untoward effects experienced during utilization. To maintain data quality, data collectors were given 2 days of training on the overall research objective, including data collection procedures, the tools, and how to complete the questionnaire. In addition, the questionnaire was pretested on 10% of the sample size in Ataye hospital 3 weeks before the actual data collection period, and necessary amendments, such as improving language clarity and the appropriateness of the tools, were made based on the findings of the pretest before the actual data collection began. Collected data were reviewed and checked daily for completeness and consistency by the supervisors and the principal investigator.
The collected data were cleaned, coded, and entered into Epidata version 3.1 and exported to the Statistical Package for the Social Sciences (SPSS) version 25 for analysis. Bivariable logistic regression was used to screen for determinant factors of herbal medicine utilization among pregnant mothers. Variables associated at P < 0.2 in the bivariable analysis were entered into a multivariable binary logistic regression analysis to assess the determinant factors of herbal medicine utilization, in which P < 0.05 was considered statistically significant. The overall results are presented in texts, tables, and figures.
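The two-stage variable selection described here amounts to filtering variables against two thresholds, which can be sketched as follows; the variable names and p-values are hypothetical.

```python
def screen_for_multivariable(bivariable_p, entry_threshold=0.2):
    """Variables with bivariable p < 0.2 are carried into the multivariable model."""
    return [var for var, p in bivariable_p.items() if p < entry_threshold]

def significant_in_final_model(multivariable_p, alpha=0.05):
    """In the multivariable model, p < 0.05 is considered statistically significant."""
    return [var for var, p in multivariable_p.items() if p < alpha]
```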
Sociodemographic Characteristics of Study Participants

A total of 422 pregnant mothers were involved, with a response rate of 100%. The mean age and the average monthly family income of the study participants were 28 years and 3,264 Ethiopian Birr (ETB), respectively. Almost all of them were Amhara, and the majority were Orthodox religious followers.

Knowledge and Practice of Pregnant Mothers Toward Herbal Medicine Utilization

Of the total study participants, 420 (99.5%) respondents had heard about herbal medicine from different sources, and 150 (35.5%) knew about the complications of herbal medicine utilization. More than half of the study participants used herbal medicine during their current pregnancy. Among those who used herbal medicine in the current pregnancy, 163 used it during the first trimester. The most common source of information about herbal medicine was family, followed by the media.

Types and Indications of Herbal Medicine Used During the Current Pregnancy

Of all respondents who stated that they had used herbal medicine during their current pregnancy, the most commonly used herbal medicines were ginger (Zingiber officinale Roscoe) and Damakesse (Ocimum lamiifolium), followed by Tenadam (fringed rue). The common indications for the utilization of herbal medicine during the current pregnancy were the common cold and headache.

Factors Associated With Herbal Medicine Utilization Among Pregnant Mothers

Bivariable and multivariable binary logistic regression were conducted to examine the determinant factors of herbal medicine utilization among pregnant mothers. In the bivariable logistic regression, variables such as the educational level of the pregnant mother, the average monthly family income, the absence of ANC visits, the presence of health problems not related to gestation, the lack of discussion of herbal medicine utilization with healthcare professionals, the knowledge level of pregnant mothers toward herbal medicine, and the lack of awareness of complications of herbal medicine utilization were significantly associated with herbal medicine utilization. However, in the multivariable binary logistic regression, only three variables (educational level, average monthly family income, and absence of awareness of complications of herbal medicine utilization) remained significantly associated with the practice of herbal medicine among pregnant mothers. Pregnant mothers with an educational level of primary school or lower were 2 times more likely to consume herbal medicine during the current pregnancy than study participants with a college-level education or above [AOR: 2.21, 95% CI: 1.17–4.18]. Study participants with a monthly family income of <2,800 ETB were almost 2 times more likely to use herbal medicine during the current pregnancy than pregnant mothers with a monthly family income of >4,200 ETB [AOR: 1.72, 95% CI: 1.01–2.92]. Moreover, pregnant mothers who lacked awareness of the complications of herbal medicine utilization were 10 times more likely to use herbal medicine during their pregnancy than study participants who were aware of these complications [AOR: 10.3, 95% CI: 6.27–16.92].
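The adjusted odds ratios reported above come from the multivariable model; as an illustration of how an odds ratio and its Wald 95% CI are derived, a crude (unadjusted) version from a hypothetical 2 × 2 table can be sketched — the counts below are invented for the example.

```python
import math

def crude_odds_ratio(a, b, c, d, z=1.96):
    """Crude OR and Wald 95% CI from a 2x2 table:
    a = exposed users, b = exposed non-users,
    c = unexposed users, d = unexposed non-users."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

For instance, with 40 users and 10 non-users among mothers unaware of complications, versus 20 users and 30 non-users among those aware, the crude OR is 6.0 with a CI that excludes 1, mirroring the direction of the adjusted estimate reported in the text.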
This study aimed to assess the utilization of herbal medicine and its determinant factors among mothers attending antenatal care visits at public health institutions in Debre Berhan town, Ethiopia. This study found that the prevalence of herbal medicine utilization during the current pregnancy was 65.6%. This finding is in line with the result of a study conducted in Zimbabwe (69.5%), and lower than the result of a study conducted in Hosanna, Ethiopia (73.1%). Moreover, this finding is higher than the findings of studies conducted in Nekemte hospitals, Ethiopia (50.4%), a public hospital in Harar, Ethiopia (40.6%), and the University of Gondar referral and teaching hospital, Ethiopia (48.6%). This difference might be associated with differences in sample size and study setting (some studies were conducted only in hospitals, whereas another was community-based). This difference might also be related to cultural differences across the regions of the country. Again, this finding is higher than the results of studies conducted in Iran (48.4%), Uganda (20%), Tanzania (10.9%), and Ghana (52.7%). This difference might be related to cultural and belief variations across countries, geographical differences, the accessibility and affordability of herbal medicines, and methodological differences between studies, such as study design, sample size, study setting, and the populations included. Our study indicated that the most commonly used herbal medicine during the current pregnancy was ginger (71.1%). This finding is consistent with the results of studies conducted in Alexandria, Egypt, Nekemte hospital, Ethiopia, and the University of Gondar referral and teaching hospital, Ethiopia. The similarity with the Ethiopian findings might be associated with socio-cultural similarity and the easy accessibility of herbs (ginger) across all regions of Ethiopia, while the study population of the Alexandria study was similar to ours.
However, this finding differs from the findings of two studies conducted in Iran, which identified sour orange and Ammi, respectively, as the most commonly used herbal medicines. This difference could be attributed to socio-cultural differences and differences in the types of herbs available across countries. Our study found that the most common indication for herbal medicine utilization during the current pregnancy was the common cold (53.3%). This finding is in line with the reports of studies conducted in a public hospital in Harar, Ethiopia, and the University of Gondar referral and teaching hospital, Ethiopia, respectively. However, the current finding differs from the results of studies conducted in Iran and Malaysia, which identified the promotion of fetal health and the facilitation of labor, respectively, as the most common indications for herbal medicine utilization. More than half (59%) of the pregnant mothers used herbal medicine in the first trimester of pregnancy, a finding in line with the results of studies conducted in Nekemte hospital, Ethiopia, the University of Gondar referral and teaching hospital, Ethiopia, and Iran. This consistency could be because many minor complications of pregnancy occur in the early stage of pregnancy, and pregnant mothers took herbal medicines to alleviate those minor problems. However, this finding differs from the studies conducted in Iran and Malaysia, both of which reported that the majority of study participants used herbal medicine in the third trimester of pregnancy. In our study, only 6% of pregnant women disclosed their utilization of herbal medicine during the current pregnancy to their healthcare providers. This finding is in line with the results of studies conducted at the University of Gondar referral and teaching hospital, Ethiopia, and in northern Uganda.
This similarity might reflect pregnant mothers' fear that healthcare providers would disapprove of the idea and practice of using herbal medicine during pregnancy if the information were disclosed. Pregnant mothers with a primary educational level were two times more likely to use herbal medicine during the current pregnancy than those with an educational status of college and above. This finding is in line with the results of studies conducted in Hosanna, Ethiopia, and the University of Gondar referral and teaching hospital, Ethiopia. A possible explanation is that educated pregnant mothers may have information about the efficacy of modern or conventional medicine relative to herbal medicine. Educated pregnant mothers may also have better information about the adverse consequences of herbal medicine utilization during pregnancy and therefore tend to use traditional medicine less than their counterparts. Pregnant mothers with a low average monthly income were almost 2 times more likely to use herbal medicine during their current pregnancy than their counterparts. This finding is in line with the results of studies conducted at the University of Gondar referral and teaching hospital, Ethiopia, in Tanzania, and in Ghana. Pregnant mothers in the lower socioeconomic class probably could not afford the cost of modern medicine, whereas herbal medicine was more accessible and affordable for them when health-related problems occurred. Pregnant mothers who had no awareness of the complications of herbal medicine utilization were 10 times more likely to use herbal medicine than pregnant mothers who were aware of these complications, which is a new finding of this study.
A lack of adequate information or knowledge about the complications of herbal medicine utilization probably led these pregnant mothers to use herbal medicine during their pregnancy. According to different pieces of evidence, herbal medicine utilization in some sub-Saharan African countries has been associated with culture and religion, which differs from our result, which indicated no significant association between herbal medicine utilization and the culture or religion of study participants. This might be associated with differences in the ratios of traditional healers and healthcare professionals to the population in some sub-Saharan African countries. The ratio of traditional healers to the population in sub-Saharan Africa is 1:500, whereas the ratio of medical doctors to the population is 1:40,000. In contrast, the large number of health extension workers in Ethiopian communities may have created adequate awareness about herbal medicine utilization during pregnancy.
Because of the cross-sectional study design, the cause-and-effect relationships between the predictor variables and pregnant mothers' herbal medicine utilization could not be determined. In addition, the study did not address the attitudes of pregnant mothers toward the utilization of herbal medicine, nor did it assess the amount of herbal medicine the mothers used.
The utilization of herbal medicine among pregnant mothers in this study was high. The most commonly used herbal medicines were ginger (Zingiber officinale Roscoe), Damakesse (Ocimum lamiifolium), and Tenadam (fringed rue). The common cold and headache were the most common indications for the utilization of herbal medicine during the current pregnancy. Furthermore, educational level, average monthly family income, and absence of awareness of the complications of herbal medicine utilization were determinant factors of herbal medicine utilization among pregnant mothers. Governmental and non-governmental health institutions should encourage traditional medicine practitioners to work together with modern medicine practitioners. As routine care during antenatal counseling sessions, healthcare providers should openly discuss and create awareness about the benefits and complications of herbal medicine utilization during pregnancy, giving special attention to pregnant mothers with a low educational level, a low monthly family income, or no awareness of the complications of herbal medicine utilization. We also recommend further research addressing the experiences of herbal medicine use among users and providers through qualitative approaches.
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.
Ethical clearance was obtained from the Research and Ethical Review Board of Debre Berhan University, Institute of Medicine and Health Science (Ref. No. IHRRCB-020/04/2021). The letter of permission from the ethical review board and the midwifery department was submitted to all governmental health institutions in Debre Berhan town. In addition, the letters of permission obtained from all four governmental health institutions in Debre Berhan town were submitted to each institution's maternal and child health unit. Lastly, informed written consent was obtained from the pregnant mothers before data collection.
GW and GF: conceptualization, formal analysis, writing-original draft preparation, writing-review and editing, and funding acquisition. GW: methodology, data curation, and visualization. GF: software and supervision. All authors contributed to the article and approved the submitted version.
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Risk of Late Implant Loss and Peri‐Implantitis Based on Dental Implant Surfaces and Abutment Types: A Nationwide Cohort Study in the Elderly | 99eab0ff-30da-4afe-8512-5ff93eafdcdd | 11651715 | Dentistry[mh] | Introduction Replacing missing teeth with implant‐supported restorations significantly reduces masticatory discomfort (J. H. Lee et al. ; S. Y. Lee et al. ). Beyond the immediate benefits of restored masticatory function, the long‐term success of implants depends on a variety of factors, making it a critical area of research. While osseointegration remains crucial, dental implant research has simultaneously expanded to explore implant failure and associated risk factors (Carra et al. ; Huang et al. ). Early implant failures, often linked to operational trauma or contamination, differ from late failures, which typically arise from biological or mechanical issues, including peri‐implantitis and excessive occlusal load (Derks et al. ). The likelihood of implant failure can be influenced by patient‐related factors such as smoking, dietary preferences, history of periodontitis, systemic health and bone quality, as well as operator‐related factors such as experience, implant geometry and restoration type (Dreyer et al. ; J. J. Kim et al. ; Roccuzzo et al. ; Serroni et al. ; Yoon et al. ). Notably, the intrinsic properties of implant restoration, particularly the implant surface and abutment types, are pivotal in determining implant success (J. H. Lee et al. ; S. Y. Lee et al. ). Various modifications of titanium implant surfaces have been developed to improve osseointegration (Albrektsson and Wennerberg ; Cochran et al. ). Sandblasting with large grit and acid etching (SA) is a common method for roughening titanium implants. This process involves altering the microstructure of the implant through sandblasting, followed by acid etching to create micro concavities (Barfeie, Wilson, and Rees ). 
Another approach involves hydroxyapatite (HA) coatings, which reflect the primary composition of the bone and are often used when fast osseointegration is essential (Dalton and Cook ; Oonishi et al. ; Sun et al. ; Xuereb, Camilleri, and Attard ; Yeo ; Yeung et al. ). Biocompatible and resorbable blasting media (RBM), including calcium phosphate particles, are also used and have been shown to enhance initial bone formation and accelerate the rate of bone integration around the implant (Citeau et al. ; Piattelli et al. ; Sanz et al. ). Implants with these surfaces can be mounted with abutments of various shapes and structures (Sailer et al. ). Pre‐made stock abutments can generally be categorized into one‐piece straight, two‐piece straight and two‐piece angled. Differences in stress distribution in the area surrounding the implant may exist between one‐piece and two‐piece abutments (Hajimiragha et al. ), whereas angled abutments may be associated with mechanical complications such as screw loosening (Pitman et al. ). Although rough and modified surfaces offer several advantages over smooth surfaces in dental implants (Jemt , ), studies comparing implant complications among various types of modified surfaces are scarce. Moreover, the abutment type could also affect clinical outcomes (Hajimiragha et al. ), warranting further research. Analysing large‐scale data can significantly help in understanding these associations. This nationwide population‐based cohort study aimed to assess the risk of late implant loss and peri‐implantitis based on implant surfaces and abutment types in the elderly population using a big data analysis approach.
Materials and Methods

2.1 Study Design and Data Source

This observational cohort study analysed data obtained from the Korean National Health Insurance Service (NHIS) and National Health Screening databases. The Korean NHIS, a government-mandated health insurance coverage, encompasses the entire population of Korea, which is approximately 50 million people. This database provides comprehensive demographic details, along with the medical records and expenses of the enrollees, including diagnoses, examinations, prescriptions and medical procedures. The diagnoses in this database were classified using the International Classification of Diseases, Tenth Revision (ICD-10), Clinical Modification codes. Furthermore, data from the National Health Screening Database, which is linked to the NHIS database, were utilized. Enrollees of the NHIS aged 40 years or older are typically recommended to undergo biennial health screenings. These health examinations encompass anthropometric measurements, blood pressure assessments and laboratory tests. The study adhered to the principles of the Declaration of Helsinki and was exempt from review by the Institutional Review Board (approval number: ERI21016) owing to the use of a de-identified and anonymized dataset. This study was conducted in accordance with the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Reporting Guidelines (von Elm et al. ).

2.2 Study Population and Dental Implants

The dental implant insurance coverage system by NHIS, initiated on 1 July 2014, currently targets patients aged 65 years and above. The lower age limit for eligibility for dental implant insurance coverage was 75 years in the first year of launch, reduced to 70 years in the second year and 65 years from the third year onwards.
Insurance coverage requirements were (1) applicable only to partially edentulous arches; (2) limited to two implants per patient for life; and (3) restorative procedures restricted to cement‐retained prostheses using abutments and non‐precious metal‐ceramic crowns, with screw access channels in crowns allowed. Implants were excluded from coverage in the following cases: (1) completely edentulous arch; (2) zygomatic implants; (3) one‐piece implants with abutment portion; (4) crowns made of materials other than non‐precious metal‐ceramic crowns; and (5) other superstructures such as attachments. NHIS‐covered dental implants meeting the aforementioned criteria can be restored in any form, including single crowns, multi‐unit crowns or fixed partial dentures. For insurance claims on dental implants, implant and abutment products were claimed along with their respective service fees. Eligible implants and abutment products were specified by codes, with implants categorized by the surface type and abutments categorized by the structure. This study included data from individuals aged ≥ 65 who underwent NHIS‐covered implant procedures from July 2014 to December 2019. The implants and abutments used were based on the NHIS‐classified product codes. Implant surfaces included (1) RBM, (2) SA and (3) HA. Pre‐made stock abutment structures included (1) one‐piece straight (one‐piece group), (2) two‐piece straight (two‐piece group) and (3) two‐piece angled (angled group). Table provides a list of implant manufacturers and brands categorized by surface type, as registered under the reimbursement items in the NHIS. 
Implants were excluded if they (1) experienced early failure before restoration delivery, (2) were restored with other types of abutments such as customized computer-aided-design/manufacturing abutments, (3) were not completed with prosthetic restorations by the inclusion period (December 2019) or (4) were fitted in patients with a diagnosis of oral cancer from 2012 up to the time prior to the completion of the implant restoration. Records of the included implants were reviewed from the date of restoration delivery until the end of the study period (December 2020).

2.3 Study Outcomes

The primary outcome of this study was ‘implant complication treatments’, which involved ‘implant removal procedures’ and ‘peri-implantitis treatments’ during the follow-up period. The incidence of these outcomes was identified through insurance billing prescriptions, and the appropriateness of each prescription was meticulously reviewed and managed by the Health Insurance Review and Assessment Service. If any inappropriate prescriptions were detected during the review, corrective actions were made, and the billed amounts were adjusted accordingly. Implant removal procedures were categorized as either simple or complex. ‘Simple implant removal’ was identified in cases in which the implant exhibited mobility and could be removed without the need for specialized instruments. Therefore, implants prescribed for simple removal were defined as having experienced ‘osseointegration loss’ (Mombelli and Lang ). ‘Complex implant removal’ involved implant and abutment fractures, nerve damage and peri-implantitis without mobility, requiring the use of a trephine burr or a dedicated removal kit. Furthermore, implants that underwent either simple or complex removal were considered to have experienced ‘late implant loss’. ‘Peri-implantitis treatments’ were defined as the occurrence of ‘peri-implant osteoplasty’, ‘implant surface decontamination’, or both.
Peri‐implant osteoplasty was indicated in cases with a probing depth of over 5 mm around the implant and marginal bone loss exceeding one‐third of the implant length. This procedure involves removing the infrabony defects while restoring the physiological contours of the alveolar bone, and may include both osteoplasty (i.e., reshaping or sculpting of the bone) and bone resection, during which the supporting bone is removed. Implant surface decontamination included surface cleaning, detoxification and threadoplasty of the implant surface. 2.4 Covariates This study utilized the NHIS database to gather extensive demographic and lifestyle data. Regarding income level, the bottom 25% of individuals and medical aid beneficiaries were classified as low income. Residence areas were categorized as metropolitan cities and other regions. Disability status was determined based on whether disability was recorded in the NHIS. Based on the ICD‐10 classification, this study identified several comorbidities. Type 2 diabetes mellitus was determined by two or more claims with the codes ‘E11’, ‘E12’, ‘E13’ or ‘E14’ as the main diagnosis. Hypertension was identified by two or more claims with the codes ‘I10’, ‘I11’, ‘I12’, ‘I13’ or ‘I15’. Dyslipidemia required two or more claims with code ‘E78’. Cancer was recognized with at least one claim using ‘V193’ and the codes ‘C00’–‘C99’. Stroke was determined by one or more claims with codes ‘I63’ or ‘I64’. Osteoporosis required two or more claims with the codes ‘M80’, ‘M81’ or ‘M82’. The types of teeth where the implants were placed were categorized by their location: maxilla, mandible, left side, right side, anterior and posterior. 2.5 Statistical Analysis SAS software (version 9.4; SAS Institute, Cary, NC) was used for statistical analysis. Baseline characteristics of the included implants were analysed using the Chi‐squared test. 
The incidence of implant complication treatments (implant removal procedures and/or peri‐implantitis treatments) was analysed according to the study population characteristics. To identify significant covariates, continuous variables were compared using the independent t ‐test and presented as mean ± standard deviation, whereas categorical variables were evaluated using the Chi‐squared test and presented as numbers (percentages). Kaplan–Meier cumulative curves were employed to illustrate the cumulative incidence probability of implant complication treatments for different implant surfaces and abutment types, with the log‐rank test applied for group comparisons. In cases where repeated prescriptions occurred for the same implant, the analysis was based on the date of the first prescribed code for the complication. The incidence rate of implant complication treatments was determined by dividing the total number of such events by the cumulative duration of follow‐up, presented as per 1000 implant‐years. The Cox proportional hazards regression model was used to assess the relationship between implant surfaces, abutments and the likelihood of implant complication treatments, and hazard ratios (HRs) and 95% confidence intervals (CIs) are reported. Clinically relevant covariates were adjusted in a hierarchical manner. Model 1 was an unadjusted model. Covariates were added in stages: first, sex and age (Model 2); then income, residence, disability status and systemic health conditions (diabetes, hypertension, dyslipidemia, cancer, stroke, and osteoporosis; Model 3); and finally, the type of tooth restored (maxillary, mandibular, anterior and posterior; Model 4). Interaction terms between implant surfaces and abutment types were tested. The association of the combined implant surface and abutment types with the risk of implant complication treatments was also investigated. 
Additionally, to focus on osseointegration loss, the association with the risk of simple implant removal was investigated. Statistical significance was set at p < 0.05.
Study Design and Data Source

This observational cohort study analysed data obtained from the Korean National Health Insurance Service (NHIS) and National Health Screening databases. The Korean NHIS, a government‐mandated health insurance program, encompasses the entire population of Korea, which is approximately 50 million people. This database provides comprehensive demographic details, along with the medical records and expenses of the enrollees, including diagnoses, examinations, prescriptions and medical procedures. The diagnoses in this database were classified using the International Classification of Diseases, Tenth Revision (ICD‐10), Clinical Modification codes. Furthermore, data from the National Health Screening Database, which is linked to the NHIS database, were utilized. Enrollees of the NHIS aged 40 years or older are typically recommended to undergo biennial health screenings. These health examinations encompass anthropometric measurements, blood pressure assessments and laboratory tests. The study adhered to the principles of the Declaration of Helsinki and was exempt from review by the Institutional Review Board (approval number: ERI21016) owing to the use of a de‐identified and anonymized dataset. This study was conducted in accordance with the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Reporting Guidelines (von Elm et al. ).
Study Population and Dental Implants

The dental implant insurance coverage system by NHIS, initiated on 1 July 2014, currently targets patients aged 65 years and above. The lower age limit for eligibility for dental implant insurance coverage was 75 years in the first year of launch, reduced to 70 years in the second year and 65 years from the third year onwards. Insurance coverage requirements were (1) applicable only to partially edentulous arches; (2) limited to two implants per patient for life; and (3) restorative procedures restricted to cement‐retained prostheses using abutments and non‐precious metal‐ceramic crowns, with screw access channels in crowns allowed. Implants were excluded from coverage in the following cases: (1) completely edentulous arch; (2) zygomatic implants; (3) one‐piece implants with abutment portion; (4) crowns made of materials other than non‐precious metal‐ceramic crowns; and (5) other superstructures such as attachments. NHIS‐covered dental implants meeting the aforementioned criteria can be restored in any form, including single crowns, multi‐unit crowns or fixed partial dentures. For insurance claims on dental implants, implant and abutment products were claimed along with their respective service fees. Eligible implants and abutment products were specified by codes, with implants categorized by the surface type and abutments categorized by the structure. This study included data from individuals aged ≥ 65 who underwent NHIS‐covered implant procedures from July 2014 to December 2019. The implants and abutments used were based on the NHIS‐classified product codes. Implant surfaces included (1) RBM, (2) SA and (3) HA. Pre‐made stock abutment structures included (1) one‐piece straight (one‐piece group), (2) two‐piece straight (two‐piece group) and (3) two‐piece angled (angled group). Table provides a list of implant manufacturers and brands categorized by surface type, as registered under the reimbursement items in the NHIS.
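The coverage rules above amount to a small decision procedure. As an illustration only, a minimal sketch could encode them as follows; all field names and labels are hypothetical placeholders, not actual NHIS claim fields (and rule (3) on cement‐retained metal‐ceramic restorations is folded into a single crown‐material check for brevity):

```python
def nhis_implant_covered(case: dict) -> bool:
    """Sketch of the NHIS coverage rules described above.
    Keys and values are illustrative placeholders, not NHIS codes."""
    if case["arch"] != "partially_edentulous":          # rule (1): partial edentulism only
        return False
    if case["prior_covered_implants"] >= 2:             # rule (2): two implants per lifetime
        return False
    if case["implant_type"] in ("zygomatic", "one_piece_with_abutment"):
        return False                                    # coverage exclusions (2)-(3)
    if case["crown_material"] != "non_precious_metal_ceramic":
        return False                                    # coverage exclusion (4)
    if case["superstructure"] == "attachment":          # coverage exclusion (5)
        return False
    return True
```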
Implants were excluded if they (1) experienced early failure before restoration delivery, (2) were restored with other types of abutments such as customized computer‐aided‐design/manufacturing abutments, (3) were not completed with prosthetic restorations by the inclusion period (December 2019) or (4) were fitted in patients with a diagnosis of oral cancer from 2012 up to the time prior to the completion of the implant restoration. Records of the included implants were reviewed from the date of restoration delivery until the end of the study period (December 2020).
Study Outcomes

The primary outcome of this study was ‘implant complication treatments’, which involved ‘implant removal procedures’ and ‘peri‐implantitis treatments’ during the follow‐up period. The incidence of these outcomes was identified through insurance billing prescriptions, and the appropriateness of each prescription was meticulously reviewed and managed by the Health Insurance Review and Assessment Service. If any inappropriate prescriptions were detected during the review, corrective actions were made, and the billed amounts were adjusted accordingly. Implant removal procedures were categorized as either simple or complex. ‘Simple implant removal’ was identified in cases in which the implant exhibited mobility and could be removed without the need for specialized instruments. Therefore, implants prescribed for simple removal were defined as having experienced ‘osseointegration loss’ (Mombelli and Lang ). ‘Complex implant removal’ involved implant and abutment fractures, nerve damage and peri‐implantitis without mobility, requiring the use of a trephine burr or a dedicated removal kit. Furthermore, implants that underwent either simple or complex removal were considered to have experienced ‘late implant loss’. ‘Peri‐implantitis treatments’ were defined as the occurrence of ‘peri‐implant osteoplasty’, ‘implant surface decontamination’, or both. Peri‐implant osteoplasty was indicated in cases with a probing depth of over 5 mm around the implant and marginal bone loss exceeding one‐third of the implant length. This procedure involves removing the infrabony defects while restoring the physiological contours of the alveolar bone, and may include both osteoplasty (i.e., reshaping or sculpting of the bone) and bone resection, during which the supporting bone is removed. Implant surface decontamination included surface cleaning, detoxification and threadoplasty of the implant surface.
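The outcome definitions above form a small taxonomy: simple removal implies osseointegration loss, either removal implies late implant loss, and osteoplasty and/or surface decontamination constitutes a peri‐implantitis treatment. A hedged sketch with hypothetical procedure labels (not NHIS billing codes):

```python
PERI_IMPLANTITIS = ("peri_implant_osteoplasty", "implant_surface_decontamination")

def classify_outcome(procedure: str) -> dict:
    """Map one prescribed procedure (illustrative labels, not NHIS billing
    codes) onto the outcome categories defined in the text."""
    late_loss = procedure in ("simple_removal", "complex_removal")
    peri = procedure in PERI_IMPLANTITIS
    return {
        "implant_complication_treatment": late_loss or peri,  # primary outcome
        "late_implant_loss": late_loss,
        "osseointegration_loss": procedure == "simple_removal",
        "peri_implantitis_treatment": peri,
    }
```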
Covariates

This study utilized the NHIS database to gather extensive demographic and lifestyle data. Regarding income level, the bottom 25% of individuals and medical aid beneficiaries were classified as low income. Residence areas were categorized as metropolitan cities and other regions. Disability status was determined based on whether disability was recorded in the NHIS. Based on the ICD‐10 classification, this study identified several comorbidities. Type 2 diabetes mellitus was determined by two or more claims with the codes ‘E11’, ‘E12’, ‘E13’ or ‘E14’ as the main diagnosis. Hypertension was identified by two or more claims with the codes ‘I10’, ‘I11’, ‘I12’, ‘I13’ or ‘I15’. Dyslipidemia required two or more claims with code ‘E78’. Cancer was recognized with at least one claim using ‘V193’ and the codes ‘C00’–‘C99’. Stroke was determined by one or more claims with codes ‘I63’ or ‘I64’. Osteoporosis required two or more claims with the codes ‘M80’, ‘M81’ or ‘M82’. The types of teeth where the implants were placed were categorized by their location: maxilla, mandible, left side, right side, anterior and posterior.
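The claim‐count rules above are straightforward to operationalize. A minimal sketch, assuming each patient's claims are available as a flat list of ICD‐10 main‐diagnosis codes (the data layout is an assumption, and the cancer rule, which additionally requires the special code ‘V193’, is omitted for brevity):

```python
# (code prefixes, minimum number of claims) per comorbidity, as stated in the text
COMORBIDITY_RULES = {
    "diabetes":     (("E11", "E12", "E13", "E14"), 2),
    "hypertension": (("I10", "I11", "I12", "I13", "I15"), 2),
    "dyslipidemia": (("E78",), 2),
    "stroke":       (("I63", "I64"), 1),
    "osteoporosis": (("M80", "M81", "M82"), 2),
}

def flag_comorbidities(claim_codes):
    """Return a True/False flag per comorbidity for one patient's claim codes."""
    flags = {}
    for name, (prefixes, min_claims) in COMORBIDITY_RULES.items():
        hits = sum(1 for code in claim_codes if code.startswith(prefixes))
        flags[name] = hits >= min_claims
    return flags
```

Note that `str.startswith` accepts a tuple of prefixes, so full codes such as ‘E119’ match the ‘E11’ rule.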
Statistical Analysis

SAS software (version 9.4; SAS Institute, Cary, NC) was used for statistical analysis. Baseline characteristics of the included implants were analysed using the Chi‐squared test. The incidence of implant complication treatments (implant removal procedures and/or peri‐implantitis treatments) was analysed according to the study population characteristics. To identify significant covariates, continuous variables were compared using the independent t‐test and presented as mean ± standard deviation, whereas categorical variables were evaluated using the Chi‐squared test and presented as numbers (percentages). Kaplan–Meier cumulative curves were employed to illustrate the cumulative incidence probability of implant complication treatments for different implant surfaces and abutment types, with the log‐rank test applied for group comparisons. In cases where repeated prescriptions occurred for the same implant, the analysis was based on the date of the first prescribed code for the complication. The incidence rate of implant complication treatments was determined by dividing the total number of such events by the cumulative duration of follow‐up, presented as per 1000 implant‐years. The Cox proportional hazards regression model was used to assess the relationship between implant surfaces, abutments and the likelihood of implant complication treatments, and hazard ratios (HRs) and 95% confidence intervals (CIs) are reported. Clinically relevant covariates were adjusted in a hierarchical manner. Model 1 was an unadjusted model. Covariates were added in stages: first, sex and age (Model 2); then income, residence, disability status and systemic health conditions (diabetes, hypertension, dyslipidemia, cancer, stroke, and osteoporosis; Model 3); and finally, the type of tooth restored (maxillary, mandibular, anterior and posterior; Model 4). Interaction terms between implant surfaces and abutment types were tested.
The association of the combined implant surface and abutment types with the risk of implant complication treatments was also investigated. Additionally, to focus on osseointegration loss, the association with the risk of simple implant removal was investigated. Statistical significance was set at p < 0.05.
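For concreteness, the two core quantities of this analysis, the Kaplan–Meier‐based cumulative incidence and the crude rate per 1000 implant‐years, can be sketched in plain Python. This is a toy illustration, not the SAS analysis; tied times are grouped per time point, a simplification of the full estimator:

```python
def km_cumulative_incidence(times, events):
    """Return [(t, 1 - S(t))] where S is the Kaplan-Meier survival estimate.
    times: follow-up in years; events: 1 = complication treated, 0 = censored."""
    data = sorted(zip(times, events))
    at_risk, surv, curve = len(data), 1.0, []
    i = 0
    while i < len(data):
        t, d, n = data[i][0], 0, 0
        while i < len(data) and data[i][0] == t:
            d += data[i][1]   # events observed at time t
            n += 1            # subjects leaving the risk set at time t
            i += 1
        if d:
            surv *= 1 - d / at_risk
        at_risk -= n
        curve.append((t, 1 - surv))
    return curve

def rate_per_1000_implant_years(times, events):
    """Crude incidence rate: events per 1000 implant-years of follow-up."""
    return 1000 * sum(events) / sum(times)
```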
Results

Initial participants were those registered in the NHIS with an implant placement code between 2014 and 2019 and aged ≥ 65 years (implants, n = 3,038,622; patients, n = 1,757,851). Following the application of the exclusion criteria (implants, n = 683,916; patients, n = 371,281), the final study cohort comprised 2,354,706 implants of 1,386,570 patients (Figure ). The most prevalent implant surface was SA (86.7%), followed by RBM (9.52%) and HA (3.78%). Two‐piece abutments were used to restore 79.9% of the included implants. The distribution of the restored teeth by implant surface and abutment type is presented in Table . Additionally, the distribution of implant placement and restoration loading years according to implant surface and abutment type is provided in Table . At the patient level, the mean age of the study population was 72.11 ± 5.23 years. Out of the total 1,386,570 patients, 679,296 (48.99%) were men, and 613,165 (44.22%) resided in urban areas. Detailed patient‐level information is presented in Table . Out of a total of 2,354,706 implants, implant complication treatments occurred in 14,440 cases (0.61%) during a mean follow‐up period of 2.84 ± 1.25 years. The crude number (percentage) of events for implant removal procedures was 12,412 (0.53%), with 8770 (0.37%) being simple implant removals and 3642 (0.15%) being complex implant removals. Peri‐implantitis treatments occurred in 2137 cases (0.09%). The crude incidence of implant complication treatments based on study population characteristics is presented in Table . Complications were significantly more common in men, individuals ≥ 75 years and those residing in urban areas ( p < 0.0001). A higher incidence was noted in patients with disabilities and in those diagnosed with diabetes, hypertension or cancer ( p < 0.01). Compared with no‐complication implants, implants with complications comprised a higher proportion of maxillary implants ( p < 0.0001).
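As a quick sanity check, the headline figures reported above are internally consistent. Approximating total follow‐up as mean follow‐up times the number of implants (an approximation, since only the mean is reported):

```python
implants = 2_354_706
events = 14_440        # implant complication treatments
mean_follow_up = 2.84  # years

pct = 100 * events / implants              # crude event percentage
implant_years = implants * mean_follow_up  # approximate total follow-up
rate = 1000 * events / implant_years       # events per 1000 implant-years

print(round(pct, 2), round(rate, 2))  # → 0.61 2.16
```

The 0.61% matches the reported crude percentage, and the overall rate of roughly 2.2 per 1000 implant‐years sits below the highest per‐group rate of 3.884 reported for RBM implants with one‐piece abutments.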
Figure presents the cumulative incidence probability curves of implant complication treatments based on the implant surfaces and abutment types. The results of the log‐rank tests indicated significant differences in the incidence of implant complication treatments across the different surface groups ( p < 0.0001), with the SA surface group showing the lowest incidence of simple implant removal ( p < 0.0001). However, no significant differences were observed between the surfaces regarding the cumulative incidence probability of peri‐implantitis treatments ( p = 0.0587). Additionally, there were no significant differences in the incidence of implant complication treatments among the different abutment type groups ( p = 0.9811). The 6‐year cumulative incidence probability of implant complication treatments did not exceed 0.03 for all types of implant surfaces and abutments. Figure details the incidence of implant complication treatments based on implant surfaces and abutment types. Both the crude percentage of events in each group and the crude incidence rate per 1000 implant‐years are presented in the figure. After adjusting for potential confounding factors (Model 4), RBM surfaces had a 1.841‐fold higher risk of implant removal procedures compared with SA surfaces, with a particularly higher risk (2.365‐fold) of simple implant removal ( p < 0.0001). Similarly, HA surfaces showed a 1.424‐fold higher risk of implant removal procedures and a 1.484‐fold greater risk of simple implant removal compared with SA surfaces ( p < 0.0001). However, implants with an RBM surface showed a lower HR for complex implant removal than those with other surfaces ( p < 0.0001). The HA surface showed an adjusted HR (95% CI) for peri‐implantitis treatments of 1.264 (1.009–1.584). However, there were no statistically significant differences in the incidence of peri‐implantitis treatments between implant surfaces ( p = 0.0587).
As for the abutment types, the risk of implant removal procedures was not associated with the abutment type ( p = 0.653). However, upon detailed analysis, the one‐piece type showed a higher risk of simple implant removal but a lower risk of complex implant removal ( p < 0.0001). No significant differences in peri‐implantitis treatment were observed among the different abutment types ( p = 0.2054). The adjusted HR values of Models 2 and 3 are presented in Table . Figure illustrates the association between combined implant surface and abutment type groups and the risk of implant complication treatments in Model 4. The incidence rate of implant complication treatments was highest for implants with an RBM surface restored with a one‐piece straight abutment, reaching 3.884 per 1000 implant‐years. Furthermore, in Model 4, the implants with an RBM surface and one‐piece abutment also showed the highest adjusted HR (95% CI) of 1.916 (1.734–2.117) compared to the implant group with an SA surface and a two‐piece abutment (1, reference). Implants with SA surfaces exhibited relatively low adjusted HR (95% CI) values of 0.916 (0.867–0.969) for the SA and one‐piece group, 1 (reference) for the SA and two‐piece group and 1.044 (0.969–1.125) for the SA and angled group ( p < 0.0001; Figure ). Moreover, the adjusted HR values for simple implant removal indicated that the SA surface implant group with a two‐piece abutment type had the lowest risk ( p < 0.0001; Figure ). In terms of both implant complication treatments and simple implant removal, the RBM group with all abutment types and the HA group with straight abutments showed significantly higher risks than those in the SA group with a two‐piece abutment ( p < 0.0001).
Discussion

This nationwide, population‐based cohort study revealed significant differences in the incidence of late implant loss (implant removal procedures) among the implant surface groups, although no significant difference was found in the incidence of peri‐implantitis treatments. Different types of abutments did not show a significant association with the risk of complications. Notably, all types of implant surfaces and abutments showed a low incidence of complication treatments during follow‐up periods of up to 6 years. The highest incidence rate was 3.884 per 1000 implant‐years, observed in the RBM surface implant group restored with a one‐piece abutment. SA implants showed the lowest incidence of late implant loss among the surfaces studied. Although many studies have compared smooth and rough surfaces and reported the individual outcomes of several rough surfaces (Jemt , , ; Raabe et al. ; van Velzen et al. ), studies comparing survival rates across various types of rough implant surfaces are scarce (H. C. Kim et al. ). This study leveraged national‐level big data, providing unique opportunities for such comparisons. HR analysis revealed that RBM and HA surfaces had a higher risk of late implant loss (implant removal procedures) and osseointegration loss (simple implant removal) compared to SA surfaces. Although SA surfaces that underwent acid etching showed relatively superior results (van Velzen et al. ), RBM and HA are also noted for their biocompatibility and ability to facilitate in vivo osseointegration (Novaes et al. ; Sanz et al. ). In the present study, across all implant surface groups, less than 1% (crude percentage) of late implant loss was observed during the follow‐up period of up to 6 years (Figure ), indicating successful performance.
The ‘simple implant removal’ code, used exclusively when an implant exhibited mobility and could be removed without instruments such as a trephine burr or removal kit, may serve as an indicator of osseointegration loss (Mombelli and Lang ). Because osseointegration occurs at the implant surface, the code's prescription can be directly linked to the type of implant surface. This relationship may underscore the importance of the implant surface type in its stability and osseointegration maintenance (J. C. Kim, Lee, and Yeo ). In contrast, ‘complex implant removal’ may become necessary due to various causes, including biological complications such as peri‐implantitis and nerve damage or mechanical complications such as abutment screw fracture, abutment fracture, implant fracture or wear of the internal threads of the implant (Tafuri et al. ). This makes the cause or situation of complex implant removals difficult to ascertain, potentially making them less directly related to the implant surface or abutment type. When comparing the outcomes associated with peri‐implantitis, HA surfaces displayed a slightly higher HR (1.264, CI: 1.009–1.584) for ‘peri‐implantitis treatments’ than SA surfaces (1, reference), indicating the possibility of a greater prevalence of significant peri‐implantitis‐related complications on HA surfaces. HA coatings on titanium implants improve bone attachment and prevent the release of metal ions into the bone (Dalton and Cook ; Oonishi et al. ; Yeo ). However, HA coatings have been associated with delamination from the titanium implant surface (Sun et al. ), which can impede bone healing and induce inflammation in the peri‐implant tissue (Sun et al. ; Yeung et al. ). Although the HR for HA was slightly higher than that for SA in this study, the difference was not significant ( p = 0.0587).
Furthermore, as shown in Figure , the difference in cumulative incidence probability was very small, suggesting that the difference in the incidence of peri‐implantitis treatments among the three surfaces is minimal. Regarding the abutment type, there was no significant difference in late implant loss or peri‐implantitis among the groups. When analysing late implant loss as ‘simple’ and ‘complex implant removals’ separately, one‐piece abutments showed slightly higher osseointegration loss compared with two‐piece types. This may be attributed to the variations in stress distribution around the implant, which have been reported to differ between one‐piece and multi‐piece abutments (Hajimiragha et al. ). Conversely, instances of ‘complex implant removal’ were more frequent with two‐piece abutments, possibly due to mechanical complications such as fractures in the relatively smaller screws or the abutment walls (Sailer et al. ). Nonetheless, when considering both ‘simple’ and ‘complex implant removals’ together, the type of abutment did not significantly affect the overall implant survival rates. Additionally, the incidence of peri‐implantitis treatments showed no association with abutment type. When evaluating the association between combined groups and the risk of implant complication treatments, the most commonly used combination, SA surface implants (86.7%) restored with a two‐piece abutment (79.91%), was used as the reference for calculating HR values (Figure ). Implants with an RBM surface restored with a one‐piece abutment were associated with a 1.916‐fold higher risk of complications compared to SA surface implants with a two‐piece abutment, even after adjusting for potential confounding factors. However, the crude incidence rate for implants with an RBM surface and a one‐piece abutment remained very low, at 3.884 per 1000 implant‐years, despite the higher risk association. This study had certain limitations.
First, factors such as bone height, quality, grafting, implant length, diameter, design and restoration splinting were unavailable in the NHIS data. Second, although evaluating implant surfaces, variations in the materials and techniques used by different companies for surface preparation were not accounted for, as specific brands or companies were not disclosed by the NHIS. Third, the history of smoking and periodontitis was not considered. Specifically, a history of periodontitis has been reported to be associated with the incidence of peri‐implantitis and implant loss (Roccuzzo et al. ; Serroni et al. ). Therefore, considering periodontitis history would have been helpful in isolating the independent association between implant complications and implant surfaces and abutment types. However, the diagnosis and prescriptions for periodontitis included in the NHIS data were mostly made by specialists in other fields or general practitioners rather than periodontists. Given the complexity of diagnosing periodontal disease, the appropriateness of these diagnoses is difficult to ensure. Therefore, this study focused on clear interventions such as ‘implant removal procedures’ and excluded diagnostic information related to periodontal disease. Future research focusing on the relationship between a history of periodontitis and implant complications, along with detailed nationwide big data analysis, is required. Despite the limitations, a major strength of this study lies in its large, nationwide, population‐based sample size, including over 2 million implants with follow‐up data of up to 6 years. In addition, unlike most clinical studies conducted in academic settings with selective patient criteria, this study mirrors real‐world scenarios by including a broad range of patients and clinicians, encompassing both specialists and general dentists. 
Its primary criterion for late implant loss, based on the straightforward metric of ‘implant removal procedures’, reduces interpretation bias among practitioners. A further strength of this study lies in the unique characteristics of the NHIS database. The NHIS in South Korea operates under a unique system in which all healthcare institutions are legally required to be designated as NHIS providers. This mandates institutions to treat NHIS‐enrolled patients and receive reimbursement fees determined by the government. The reimbursement claims are meticulously reviewed by the Health Insurance Review and Assessment Service. Additionally, NHIS enrollment is mandatory for all citizens. Strict government regulation prevents clinicians from arbitrarily prescribing non‐reimbursable treatments. Consequently, the NHIS database used in this study is expected to have minimal missing data, such as unauthorised non‐reimbursement prescriptions.
Conclusions

Based on the analysis of follow‐up data from over 2 million implants for up to 6 years, this study found that the implant surface type was associated with the risk of late implant loss, whereas no significant association was found between the abutment type and the risk of late implant loss. Implants with RBM and HA surfaces had a 1.8‐ and 1.4‐fold higher risk of late implant loss, respectively, compared to SA surfaces. However, despite the slightly superior results of the SA group, both RBM and HA groups also showed low incidences of implant removal procedures, indicating that all three surface groups are likely to function successfully. No significant differences were observed in the incidence of peri‐implantitis treatments among the different implant surface and abutment type groups. Further research with extended follow‐up periods is needed.
Su Young Lee: conception, design, data acquisition and interpretation, drafting the manuscript. René Daher: conception, design, data interpretation. Jin‐Hyung Jung: design, data acquisition and analysis, all statistical analyses. Kyungdo Han: design, data acquisition and analysis, all statistical analyses. Irena Sailer: conception, design, data interpretation. Jae‐Hyun Lee: conception, design, data acquisition and interpretation, drafting the manuscript. All authors contributed to critically revising the manuscript, gave their final approval and agree to be accountable for all aspects of the work.
This study, which analysed nationwide big data, was confirmed to be exempt from review by the Institutional Review Board (approval number: ERI21016).
The authors declare no conflicts of interest.
Appendix S1. Supporting information.
The outcomes of team‐based learning vs small group interactive learning in the obstetrics and gynecology course for undergraduate students | 385ce1ee-b6d6-4241-a77b-e7dd29ff9aa5 | 11103139 | Gynaecology[mh] | INTRODUCTION Team‐based learning (TBL) was introduced into medical education in 2001 and was adapted from a business school environment. TBL promotes teamwork, communication skills and the efficient use of faculty resources. In comparison with traditional learning methods, TBL is learner‐centered and focuses on group interactions, group work and knowledge application, using effective pedagogical principles such as pre‐class preparation, active learning, peer learning and instant feedback. , Although TBL is well‐established as an active teaching method, with some documented benefits for students, such as enhanced student engagement, and knowledge acquisition in a variety of disciplines including pharmacy, engineering, business, nursing, and preclinical medical disciplines, , , , , it has not been as widely adopted in clinical disciplines in undergraduate medical education. However, the context of learning in clinical disciplines is complex and findings from non‐medical and preclinical disciplines may not be directly applicable. One of the challenges of using TBL in clinical disciplines is the short‐term clerkships, which do not allow teams to work together for a longer time and mature. In contrast with the preclinical disciplines where the main method of teaching has been cathedral lectures, the clinical disciplines traditionally use a variety of methods such as bedside teaching, apprenticeship, simulations, small group interactive seminars, problem‐based seminars and case studies. Thus, it is challenging to draw firm conclusions about the benefits of implementing TBL in clinical disciplines due to the range of other teaching methodologies used simultaneously. 
A scoping review of published literature by our research group showed that most of the studies (90%) on implementing TBL in clinical disciplines adopted a modified version where one or more steps of TBL were missing. Furthermore, the methodological quality of the studies varied substantially, making it difficult to synthesize evidence and draw reliable conclusions. Most of the previously published studies use traditional lectures as a comparator, with only a few comparing TBL with seminars in clinical disciplines. Therefore, the primary aim of the current study was to compare TBL with traditional small group interactive learning (SIL) in a prospective cross‐over trial with randomized allocation of seminars to student groups to investigate knowledge acquisition and retention by undergraduate medical students during the obstetrics and gynecology clerkship. We also investigated student engagement and satisfaction with the learning process.

MATERIAL AND METHODS

2.1 Study setting and population

The study was conducted at Karolinska Institutet, a medical university in Stockholm, Sweden. All students attending the obstetrics and gynecology clerkships during the Autumn semester of 2022 were invited to participate. The obstetrics and gynecology (Ob/Gyn) clerkships are 6 weeks long in the 5th year curriculum. Students attend clinical clerkship in two different batches consecutively during one semester. All students attending the course are divided by the administrative staff of the university into four groups consisting of approximately 40 students, and each group is assigned to one of the four large teaching hospitals in Stockholm affiliated to Karolinska Institutet (ie Karolinska University Hospital Huddinge, Karolinska University Hospital Solna, Stockholm South Hospital and Danderyd Hospital).
During this clerkship a combination of teaching methods is used, eg lectures, seminars and clinical rotation, where students participate alongside obstetrician gynecologists in their everyday clinical work. Most of our participants had had only one previous TBL session and were not very familiar with TBL, since this model of active learning was adopted by Karolinska Institutet only recently, in 2021, with its initial implementation starting in preclinical disciplines followed by gradual introduction into clinical disciplines. 2.2 Design of the study We performed a prospective crossover study to compare the TBL seminars with SIL seminars. Two seminars – “Bleeding during pregnancy” and “Abnormal uterine bleeding” – were chosen from the curriculum to be delivered as TBL and traditional SIL. As students attended the clinical clerkship in two consecutive groups during the same semester, the seminars were randomly allocated to the groups using a simple randomization procedure of drawing sealed opaque envelopes. The first group was allocated to the seminar on “Bleeding during pregnancy” in TBL format and the seminar on “Abnormal uterine bleeding” in SIL format, whereas the second group was allocated to the same seminars in the opposite format. The student:teacher ratio was approximately 10:1 in the traditional SIL seminars and 20:1 in the TBL seminars. 2.3 Team‐based learning seminars (intervention) For the TBL sessions we used InteDashboard (InteDashboard Inc., Singapore), an all‐in‐one TBL electronic platform for the digital individual readiness assurance test (iRAT), team readiness assurance test (tRAT) and application exercises. The students were informed about the process of creating the teams and randomly assigned to teams using a computer‐generated sequence, a feature available in InteDashboard. Each team had the recommended ideal group size of five to seven students and each TBL session had 17 to 22 students.
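The random team formation described above can be sketched in a few lines. This is an illustrative reimplementation only, not InteDashboard's actual code, and the function name `assign_teams` is our own:

```python
import random

def assign_teams(students, team_size=6, seed=None):
    """Randomly split a class into teams of roughly `team_size` members.

    Illustrative sketch: InteDashboard performs this internally with a
    computer-generated sequence; the interface here is hypothetical.
    """
    rng = random.Random(seed)
    shuffled = students[:]
    rng.shuffle(shuffled)
    n_teams = max(1, round(len(shuffled) / team_size))
    teams = [[] for _ in range(n_teams)]
    # deal students round-robin so team sizes differ by at most one
    for i, student in enumerate(shuffled):
        teams[i % n_teams].append(student)
    return teams

# A TBL session of 20 students yields three teams of 6-7 members
teams = assign_teams([f"student_{i}" for i in range(20)], team_size=6, seed=42)
```

Round-robin dealing after a shuffle keeps every team within the recommended five-to-seven range for the session sizes reported in the study (17 to 22 students).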
Each TBL seminar started with a short introduction to the TBL concept and the learning objectives for the seminar. All TBL seminars were led by the same instructor (IS), who was trained in teaching TBL, holds a Team‐Based Learning Collaborative certification and has several years of general teaching experience. The TBL sessions consisted of four steps in accordance with the classic TBL approach. The structure of each step and the time slots are summarized in Table . The first step was the pre‐class preparation phase, where the students had to read certain predefined materials in their recommended textbooks and watch video lectures covering the two subject areas: bleeding during pregnancy and abnormal uterine bleeding. The second step was the readiness assurance process, which was accomplished by using the iRAT and tRAT. Both iRAT and tRAT were closed‐book assessments. The iRAT was taken by each student individually. The tRAT was completed by the teams after discussing the questions and their responses among the team members to arrive at a consensus. Immediate feedback was provided by InteDashboard, which displayed whether the correct answer had been chosen. An inter‐team discussion followed the tRAT and all questions were discussed thoroughly. The discussion was led by the facilitator (IS). The teams could also appeal and ask questions during this part of the discussion if they did not agree with the answers provided. The third TBL step was the application exercises. To create them we adhered to the “4S” principle: (1) Significant problem, (2) Same problem, (3) Specific choice and (4) Simultaneous reporting. The application exercises were realistic clinical scenarios posing a significant problem. All groups then had 25 minutes to discuss the same problem and to write down their specific choice of answer. The answers were reported simultaneously in InteDashboard to the facilitator, who moderated the discussion, clarified concepts and discussed all questions with the groups.
The fourth TBL step, the peer‐evaluation, was performed at the end of the TBL session on paper sheets. The students rated their team members' contribution to the discussion by distributing a total of 100 points to their team members according to Fink's (“Divide up the Money”) method. The students were not forced to assign different point values to their team members. They could also provide written feedback. The results of the sub‐components of the TBL were not taken into account in the students’ final grade. 2.4 Traditional small group interactive learning seminars (control) The traditional SIL seminars in the obstetrics and gynecology clerkship were 3 hours long and based on clinical scenarios. In each seminar, approximately 10 medical students (in three of the four hospitals) and 20 (in the fourth hospital) participated. As in TBL, the students had a preparation phase where they had to prepare four to five predefined clinical scenarios regarding history taking, clinical exam and investigations, differential diagnosis and treatment. The cases were then discussed between students and with the facilitator of the seminar. 2.5 Outcomes and the measurement tools The primary outcome was knowledge acquisition and retention assessed through final examination scores. The final examination for the course was a theoretical test which combined single best answer questions (10 items) with short answer questions (11 items) and had a maximum score of 52.5 points. In the final exam, there were questions related to both types of seminars (7.5 points for the Bleeding during pregnancy seminar and 14 points for the Abnormal uterine bleeding seminar). The secondary outcomes were student satisfaction and engagement. For all teaching sessions the students completed a self‐reported 15‐item questionnaire on satisfaction and engagement (Appendix ). “A Scoring Guide for the Student Self‐report of Engagement Measure”, which is a validated tool, was used to measure engagement. 
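Fink's “Divide up the Money” scheme described above requires each rater's allocations across teammates to be non-negative and to total exactly 100 points. A minimal validity check (our own hypothetical helper, not part of the study materials) could look like:

```python
def validate_peer_scores(allocations):
    """Check one rater's point distribution under Fink's
    'Divide up the Money' method: points must be non-negative
    and sum to exactly 100 across the rated teammates.
    (Hypothetical helper, for illustration only.)
    """
    points = list(allocations.values())
    return all(p >= 0 for p in points) and sum(points) == 100

# A rater in a five-person team scores their four teammates:
ok = validate_peer_scores({"A": 30, "B": 25, "C": 25, "D": 20})   # True
bad = validate_peer_scores({"A": 40, "B": 40, "C": 30, "D": 0})   # sums to 110 -> False
```

Note that equal allocations (e.g. 25 points to each of four teammates) also pass, consistent with the study's choice not to force differentiated ratings.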
Student satisfaction with the two specific seminars was assessed using the Student Satisfaction Subscale, part of the same validated tool. All the questions were answered anonymously using a five‐point Likert scale (1 = strongly disagree, 5 = strongly agree). A subanalysis of the iRAT and tRAT results was performed to better understand the students’ learning process in TBL sessions. 2.6 Statistical analyses Frequencies and proportions were used for the description of sample characteristics. For continuous numerical variables, means and standard deviations (SD) or medians and quartiles were calculated. The Mann–Whitney U‐test was used to compare differences between the outcomes of TBL and SIL. A two‐sided P‐value <0.05 was considered significant. All analyses were performed using IBM SPSS Statistics software version 24.0 (IBM Corp., Armonk, NY, USA).
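The between-format comparisons relied on the Mann–Whitney U-test. As an illustration of what the test computes (the study itself used SPSS), a minimal pure-Python version of the U statistic with mid-ranks for ties might look like this; the p-value, which additionally needs an exact or normal-approximation reference distribution, is omitted:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for two samples, using mid-ranks for ties.

    Illustrative sketch only; the study ran the full test (statistic plus
    two-sided p-value) in SPSS.
    """
    combined = sorted((v, g) for g, sample in enumerate((x, y)) for v in sample)
    n = len(combined)
    rank_sum_x = 0.0
    i = 0
    while i < n:
        j = i
        while j < n and combined[j][0] == combined[i][0]:
            j += 1
        mid_rank = (i + j + 1) / 2.0  # average of the 1-based ranks i+1 .. j
        rank_sum_x += mid_rank * sum(1 for k in range(i, j) if combined[k][1] == 0)
        i = j
    n1, n2 = len(x), len(y)
    u1 = rank_sum_x - n1 * (n1 + 1) / 2.0
    return min(u1, n1 * n2 - u1)

# Complete separation of the two samples gives U = 0:
mann_whitney_u([1, 2, 3], [4, 5, 6])  # 0.0
```

The rank-based construction is what makes the test suitable for the ordinal Likert items and the non-normally distributed exam scores reported here.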
RESULTS A total of 157 students rotated through the obstetrics and gynecology clerkship during Autumn 2022. The mean age of the students was 27.4 years (SD 4.3) and 65.5% (103/157) were female. A total of 148 students attended the TBL and SIL seminars, and 132 of them answered the questionnaires regarding student engagement and satisfaction. 3.1 Knowledge acquisition and retention There were no statistically significant differences between TBL and SIL seminars regarding student knowledge acquisition and retention when comparing final exam scores on the respective items. The median score on the exam items from the TBL seminar was 6.5 (4.0–12.5) and the median score on the items from the SIL seminar was 6.5 (4.5–11.5). 3.2 Student satisfaction Table shows the median scores for the participating students’ satisfaction with the TBL seminars and the SIL seminars. No significant differences were found between the two teaching methods, except for “The way the facilitator led the seminar is suitable for the way I learn”, with the students preferring SIL.
3.3 Student engagement Table shows the median scores for the participating students’ engagement for TBL seminars and for the SIL seminars. There was a significant difference in favor of the TBL regarding “I talked in class with other students about teaching material”. No other significant differences between the two teaching methods were found. 3.4 Learning process in TBL session The median scores for iRAT were 60% (40%–70%) and the median scores for tRAT were 80% (70%–90%). The tRAT scores were significantly higher than the iRAT scores ( P < 0.01). Nineteen of the 24 teams had total team scores that were higher than, or equal to, the score of the team's best member.
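The sub-analysis in section 3.4 – whether a team's tRAT score reaches its best member's iRAT score, and the mean gain over the team average – can be illustrated with a short sketch. The scores below are invented and the helper name is our own:

```python
# Illustrative sub-analysis of the readiness assurance scores.
# The numbers are made up; the study data are not reproduced here.
teams = [
    # ([individual iRAT scores in %], team tRAT score in %)
    ([60, 40, 70, 50, 55], 80),
    ([70, 65, 60, 75, 50], 75),
    ([45, 55, 60, 40, 50], 55),
]

def team_exceeds_best_member(irat_scores, trat_score):
    """True when the team tRAT score is >= the best individual iRAT score."""
    return trat_score >= max(irat_scores)

n_exceeding = sum(team_exceeds_best_member(ind, team) for ind, team in teams)
mean_gain = sum(team - sum(ind) / len(ind) for ind, team in teams) / len(teams)
```

With these invented data, two of the three teams match or exceed their best member, and the mean team gain over the average individual score is about 13.7 percentage points; the study reported the analogous figures as 19 of 24 teams and an average gain of 23%.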
DISCUSSION In this study we wanted to evaluate the impact of introducing TBL, an increasingly popular pedagogical method in medical education, for teaching clinical disciplines in medical school. We could not show superiority of TBL over SIL in student knowledge acquisition and retention during the obstetrics and gynecology clerkship for undergraduate medical students. Neither could we observe statistically significant differences in student self‐reported satisfaction and engagement. Of the 15‐item questionnaire, only “The way the facilitator led the seminar is suitable for the way I learn” was favored in SIL and “I talked in class with other students about teaching material” was favored in TBL. Due to multiple testing, these results should be interpreted with caution. Most of the previously published literature compares TBL with traditional lectures. Only a few studies have compared the benefits of TBL with seminars in clinical disciplines, but their results show no differences in knowledge acquisition between groups, nor any significantly improved performance in the key feature problem examination. One study showed significantly improved knowledge acquisition, but no difference in long‐term knowledge retention between these teaching methods. In obstetrics and gynecology, the implementation of TBL has so far been studied only in comparison with traditional lectures or with no comparator at all. The research findings are also inconsistent, with one study reporting no differences in knowledge acquisition and one finding improvements in national board test performance but not in knowledge retention. To our knowledge, no studies have compared TBL with SIL in this medical specialty. The methods used to assess knowledge acquisition and retention vary considerably across different studies, from final exam scores to national board exam scores.
Although many studies show improvement in knowledge acquisition and retention with TBL, there are some that are in concordance with our results. The group discussions in TBL allow both intra‐team and inter‐team debating. The immediate feedback provided during the readiness assurance process is expected to enhance individual learning as well as the team communication process. We examined whether students benefit from the team interactions in TBL, which is represented by the gain in scores from iRAT to tRAT. The tRAT scores were significantly higher than the iRAT scores. The average group scores were 23% higher than the individual scores, which suggests that peer learning is an efficient method of learning. However, the overall team scores surpassed the score of the team's best member in only 50% of the cases. This can be explained by the short duration of the obstetrics and gynecology clerkship, which is only 6 weeks long and relies mostly on bedside learning, not allowing enough time for the teams to mature and become the highly functional teams described by Michaelsen et al. Our students were relatively new to TBL as a teaching method and had limited experience of TBL sessions, which may explain the relatively low iRAT scores. A study by Carasco et al. showed that prior experience with TBL improves both iRAT and tRAT scores, especially among weaker students. Although there are several studies reporting increased student satisfaction with TBL, these results could not be replicated in our study. That could be partially explained by the use of different comparators, or no comparators at all, in previous studies. There was one item that was statistically significantly favored by students in SIL seminars compared with TBL seminars: “The way the facilitator led the seminar is suitable for the way I learn”. Due to multiple testing, these results should be interpreted with caution.
However, we can speculate that during clinical clerkships students would rather discuss clinical cases with their peers than take a more theoretical approach. Furthermore, TBL relies on the ambiguity of the application exercises, which are meant to stimulate intra‐team and inter‐team discussions. Medical students may find this confusing, especially if they are used to getting answers from clinical experts during seminars in previous clerkships. Previous studies show a higher level of student engagement in TBL seminars in clinical disciplines when compared with traditional teaching methods, such as lectures and case‐based discussion seminars. Our results showed no statistically significant difference in student engagement between TBL and SIL. This may be explained by the high level of engagement in the discussions already present in the SIL seminars. However, in contrast to other small group interactive learning methods, in TBL a single qualified expert can facilitate several small groups of students in a relatively large lecture room. In our study, the benefit of TBL was mainly limited to a higher student‐to‐teacher ratio compared with SIL. Our findings are not in concordance with what has been reported previously from preclinical disciplines in medical schools or other nonmedical subject areas. This suggests that the findings from such studies may not be directly applicable to clinical disciplines, due to the inherent complexity of their learning context, which includes the use of a variety of teaching methods. Our study showed that an increased student‐to‐teacher ratio could be accommodated in TBL without compromising learning outcomes and student satisfaction. Therefore, TBL can be particularly advantageous in decreasing faculty workload, since it can be extended to larger groups without losing its effectiveness, provided there are suitable rooms available for TBL sessions.
The main strength of this study is its crossover design with randomized allocation of the seminars, ensuring similar demography of the groups for both teaching methods. Another strength is that the TBL concept was applied as recommended by Haidet et al. with no modifications, so that our results can be compared with results from other clinical disciplines. Our study has some limitations. First, an a priori sample size/power calculation was not performed, since we intended to include all eligible students attending the obstetrics and gynecology clerkships during one semester. However, our sample size compares favorably with previous studies with a similar design in clinical clerkships. Secondly, the limited number of exam questions in the final exam and the different weighting of scores for the two seminars could impact the results. However, that would be expected to impact both TBL and SIL equally. Thirdly, the cross‐over design could have a carry‐over effect on the groups that had the TBL seminars first. CONCLUSION In this study, TBL was not superior to SIL in terms of undergraduate medical students’ knowledge acquisition and retention, or their satisfaction and engagement, in the obstetrics and gynecology course. However, as TBL had a student‐to‐teacher ratio double that of SIL, its implementation might decrease the faculty workload without adversely affecting the students’ knowledge acquisition/retention and satisfaction. Irene Sterpu, Lotta Herling, Ganesh Acharya and Jonas Nordquist designed the protocol for the study. All authors planned the TBL seminars together. The data analysis was performed by Irene Sterpu. All authors contributed to article revision and approved the submitted version. The authors have stated explicitly that there are no conflicts of interest in connection with this article. The study protocol was reviewed by the Swedish Ethical Review Authority and granted exempt status on June 6, 2022 (Ref. Dnr: 2022‐02891‐01).
All students participating in the study provided written informed consent. Appendix S1.
Baseline Survey of JPHC Study – Design and Participation Rate | 73acca17-2c61-4baa-ab11-ed754e5d2d7e | 11858361 | Surgical Procedures, Operative[mh] | Data collection from cohort subjects at baseline is the core work of a prospective study, together with follow-up. In most prospective studies, the data obtained from questionnaires usually serve as the baseline data. However, these data are subjective and can be unreliable for some factors. More objective information is therefore useful, and human materials such as blood, tissue and toenails are frequently collected and stored for future use in prospective studies.
The Japan Public Health Center-based prospective Study on cancer and cardiovascular diseases (JPHC Study) was initiated with 4 population cohorts and a health checkup cohort (Cohort I) in 1990, and 5 population cohorts and two Suita city cohorts (Cohort II) were merged into it in 1993. Subjects Cohort I As of January 1, 1990, we established a population-based cohort of 54,498 residents (27,063 men and 27,435 women) who registered their address in 14 administrative districts (city, town or village) supervised by four Public Health Center (PHC) areas: 12,291 from Ninohe city and Karumai town in Ninohe PHC area, Iwate prefecture; 15,782 from Yokote city and Omonogawa town in Yokote PHC area, Akita prefecture; 12,219 from Usuda, Saku, Koumi and Kawakami towns and Yachiho, Minami-aiki, Kita-aiki and Minami-maki villages in Saku PHC area, Nagano prefecture; and 14,206 from Gushikawa city and Onna village in Ishikawa PHC area, Okinawa prefecture, and who were born from January 1, 1930 to December 31, 1949 (40 to 59 years of age). We further included 7,097 participants (2,919 men and 4,178 women) of the health checkup program in Katsushika PHC area in Tokyo metropolis from the fiscal years 1990 to 1994 (2,440 in 1990, 2,211 in 1991, 173 in 1992, 1,033 in 1993, and 1,240 in 1994), to which all residents aged 40 and 50 years were invited. Five PHC areas were selected based on variation in the mortality rate of stomach cancer for our previous ecological study, in which randomly selected subjects were intensively examined.
Cohort II As of January 1, 1993, we established a population-based cohort of 62,398 residents (30,651 men and 31,747 women) who registered their address in 13 administrative districts (city, town or village) supervised by five PHC areas: 21,488 from Tomobe town and Iwase town in Kasama PHC area, Ibaraki prefecture; 3,571 from Oguni town in Kashiwazaki PHC area, Niigata prefecture; 8,606 from Kagami town and Noichi town in Tosayamada PHC area, Kochi prefecture; 14,624 from Uku town, Ojika town, Shin-uonome town, Arikawa town, Kamigoto town and Narao town in Arikawa PHC area, Nagasaki prefecture; and 14,109 from Hirara city and Gusukube town in Miyako PHC area, Okinawa prefecture, and who were born from January 1, 1923 to December 31, 1952. In Suita city in Suita PHC area, Osaka prefecture, two different cohorts were set up. The first cohort (Suita 1) was defined as all 9,747 residents (4,793 men and 4,954 women) in Suita city aged 40 or 50 years in the fiscal year 1993, because they were invited to the comprehensive health checkup program conducted by the city. The second cohort (Suita 2) was defined as a part of the Suita study, in which subjects were arbitrarily selected based on the population registry of the city in the years 1989 through 1992, aged 30 to 79 years, stratified by sex and 10-year age group. The 6,680 subjects (3,296 men and 3,384 women) aged 40 to 69 years as of April 1, 1993 were used for the JPHC Study. Six PHC areas were selected considering geographical distribution and feasibility. Baseline Survey Questionnaire: A self-administered questionnaire was submitted to all cohort subjects, who were asked to report their lifestyle, including socio-demographic situation, personal medical history, smoking and drinking history, and dietary habits. The questionnaire used in Cohort II was modified in several items.
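As a quick arithmetic check, the per-area enrolment counts reported above do sum to the stated totals for the two base population cohorts:

```python
# Per-PHC-area counts for the base population cohorts, as reported in the text.
# (The Katsushika health checkup participants and the two Suita cohorts were
# additional to these base totals.)
cohort_i_areas = {
    "Ninohe": 12291,
    "Yokote": 15782,
    "Saku": 12219,
    "Ishikawa": 14206,
}
cohort_ii_areas = {
    "Kasama": 21488,
    "Kashiwazaki": 3571,
    "Tosayamada": 8606,
    "Arikawa": 14624,
    "Miyako": 14109,
}

assert sum(cohort_i_areas.values()) == 54498   # Cohort I total reported
assert sum(cohort_ii_areas.values()) == 62398  # Cohort II total reported
```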
The questionnaire was distributed mostly by hand and partly by mail, in 1990 (as a rule) in the 4 populations of Cohort I and in 1993 (as a rule) in the 5 populations of Cohort II. Incomplete answers were supplemented by telephone interview. In Katsushika PHC area, the questionnaire was administered by interview on the occasion of the health checkup from 1990 to 1994. In Suita PHC area, the questionnaire was mailed to all the subjects from 1993 to 1995 and supplemented by interview for participants of the health checkup on that occasion, or by telephone interview for those who did not attend the health checkup. Blood and health checkup: A total of 10 ml of blood was provided voluntarily by cohort subjects and collected into a heparinized tube on the occasion of a health checkup program, sponsored mostly by each local government or in some cases by a company. The purpose of the special blood collection for the JPHC Study, and the human rights involved, were explained to all cohort subjects. The tube was centrifuged for 10 min at 3,500-4,000 rpm to obtain plasma and a buffy coat layer within 12 hours. The plasma and buffy layer were divided into four 1.0 ml tubes (three for plasma and one for the buffy layer) and stored at -80°C. The blood was collected from 1990 to 1992 (1990 to 1994 for the Katsushika PHC cohort) in Cohort I and from 1993 to 1995 in Cohort II. Data from the health checkup were also obtained on this occasion. The common items were the following: anthropometric measures (height and weight), blood pressure (systolic and diastolic), urinalysis (protein, sugar and blood in spot urine), lipids (total cholesterol, HDL-cholesterol, triglyceride), liver function tests (GOT or AST, GPT or ALT, and gamma-GTP), anemia tests (red blood cell count, hematocrit and hemoglobin), blood sugar, uric acid and hepatitis B virus surface antigen (HBsAg).
In Suita city in Suita PHC area, Osaka prefecture, two different cohorts were set up. The first cohort (Suita 1) was defined as all 9,747 residents (4,793 men and 4,954 women) in Suita city with 40 or 50 year-old in the fiscal year 1993, because they were invited to the comprehensive health checkup program conducted by the city. The second cohort (Suita 2) was defined as a part of the Suita study ) , in which subjects were arbitrarily selected based on the population registry of the city, in the years 1989 through 1992 and aged 30 to 79 years, stratified by sex and 10 year age group. The 6,680 subjects (3,296 men and 3,384 women) with aged 40 to 69 years as of April 1, 1993 were used for the JPHC study. Six PHC areas were selected considering geographical distribution and feasibility.
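As a quick consistency check (ours, not part of the original report), the per-PHC-area enrollment counts quoted above can be summed and compared with the stated cohort totals:

```python
# Consistency check (ours) of the enrollment counts quoted above; all
# figures are taken verbatim from the text.
cohort1_areas = {"Ninohe": 12291, "Yokote": 15782, "Saku": 12219, "Ishikawa": 14206}
cohort2_areas = {"Kasama": 21488, "Kashiwazaki": 3571, "Tosayamada": 8606,
                 "Arikawa": 14624, "Miyako": 14109}

cohort1_total = sum(cohort1_areas.values())   # stated as 54,498
cohort2_total = sum(cohort2_areas.values())   # stated as 62,398
suita_total = 9747 + 6680                     # Suita 1 + Suita 2, stated as 16,427

print(cohort1_total, cohort2_total, suita_total)
```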
Questionnaire: A self-administered questionnaire was distributed to all cohort subjects, who were asked to report lifestyle factors such as socio-demographic situation, personal medical history, smoking and drinking history, and dietary habits. The questionnaire used in Cohort II was modified in several items. The questionnaire was distributed mostly by hand and partly by mail, as a rule in 1990 in the 4 populations of Cohort I and in 1993 in the 5 populations of Cohort II. Incomplete answers were supplemented by telephone interview. In Katsushika PHC area, the questionnaire was administered by interview on the occasion of the health checkup from 1990 to 1994. In Suita PHC area, the questionnaire was mailed to all subjects from 1993 to 1995 and supplemented by interview for participants of the health checkup on that occasion, or by telephone interview for those who did not attend the health checkup.

Blood and health checkup: A total of 10 ml of blood was provided voluntarily by cohort subjects and collected into a heparinized tube on the occasion of the health checkup program, sponsored mostly by each local government or in some cases by a company. All cohort subjects were informed of the purpose of the special blood collection for the JPHC Study and of their rights. The tube was centrifuged for 10 min at 3,500-4,000 rpm within 12 hours to obtain plasma and a buffy coat layer. The plasma and buffy coat were divided into four 1.0 ml tubes (three for plasma and one for buffy coat) and stored at -80°C. The blood was collected from 1990 to 1992 (1990 to 1994 for the Katsushika PHC cohort) in Cohort I and from 1993 to 1995 in Cohort II. The data from the health checkup were also obtained on this occasion.
The common items are the following: anthropometric measures (height and weight), blood pressure (systolic and diastolic), urinalysis (protein, sugar and blood in spot urine), lipids (total cholesterol, HDL-cholesterol, triglyceride), liver function tests (GOT or AST, GPT or ALT, and gamma-GTP), anemia tests (red blood cell count, haematocrit and hemoglobin), blood sugar, uric acid and hepatitis B virus surface antigen (HBsAg).
Cohort I: Among the 54,498 population-based cohort subjects and 7,097 health checkup cohort subjects, 43,149 (79%), 20,665 (76%) men and 22,484 (82%) women, and 7,096 (100%) (2,919 men and 4,177 women) returned their questionnaires, respectively. Although the date of entry ranged from January 1990 to May 1992 in the 4 population-based cohorts, 54% was concentrated between February 1990 and March 1990; only 4% responded in 1991 or later. In total, 17,587 (32%) of the population cohort, 6,556 (24%) men and 11,031 (40%) women, and 7,050 (99%) of the health checkup cohort, 2,901 men and 4,149 women, provided blood, and their plasma and buffy coat were stored at -80°C. We also obtained health checkup data from 17,923 (33%) of the population cohort, 6,532 (24%) men and 11,391 (42%) women, and 5,388 (76%) of the health checkup cohort, 2,245 (77%) men and 3,143 (75%) women.

Cohort II: Among the 62,398 population-based cohort subjects and 16,427 Suita cohort subjects, 52,256 (84%), 24,804 (81%) men and 27,452 (86%) women, and 10,960 (67%), 4,987 (62%) men and 5,973 (72%) women, returned their questionnaires, respectively. Although the date of entry ranged from January 1993 to December 1994 in the 5 population-based cohorts, 67% was concentrated between February 1993 and March 1993, and the remaining subjects, except those in Tomobe town, Kasama PHC area, responded within 1993. The baseline survey of the 11,314 subjects in Tomobe town was conducted in 1994, and more than 99% responded in January or February 1994. In total, 18,894 (30%) of the population cohort, 6,582 (21%) men and 12,312 (39%) women, and 5,480 (33%) of the Suita cohort, 2,120 (26%) men and 3,360 (40%) women, provided blood, and their plasma and buffy coat were stored at -80°C. We also obtained health checkup data from 19,292 (31%) of the population cohort, 6,476 (21%) men and 12,312 (40%) women, and 5,307 (32%) of the health checkup cohort, 1,993 (25%) men and 3,314 (40%) women.
The JPHC Study includes 29 study sub-areas (city, town or village level) from 11 PHC areas. These sub-areas fall into 3 types: urban cities with populations over 300,000 (Katsushika ward and Suita city), local cities with populations over 20,000 (Ninohe, Yokote, Gushikawa and Hirara cities), and rural towns or villages (the other 23 towns or villages). The questionnaire and the collection of blood and checkup data were applied only to participants of the health checkup program in Katsushika area, and therefore the response rate there was substantially higher than in the other areas. Although the methods applied in the two sub-cohorts in Suita city differed slightly from those of the population-based cohorts, the subjects were in both cases drawn from the population registry and were not participants of a health checkup program. The response rates to the questionnaire tended to be higher (87%) in the 23 rural towns or villages, lower (66%) in the urban city (Suita city), and intermediate (73%) in the 4 local cities. The same trend was also observed for the collection rates of blood and checkup data, which were collected only from participants of the local health checkup programs, although the rates were lower in Gushikawa city than in the other local cities. The response rates were consistently lower in men than in women. These sex differences were more marked in the collection rates of blood and checkup data, because women tend to participate in the health checkup program provided by the local government, which is our major source of checkup data and blood collections, whereas men usually undergo a health checkup at their workplace. In summary, among 140,420 targeted subjects, we created a database with information from 113,461 (81%) questionnaires regarding lifestyle and from 47,910 (34%) sets of health checkup data. We also stored 3 aliquots of plasma and one aliquot of buffy coat from 49,011 subjects. These data and blood samples serve as the basis for the JPHC Study.
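The summary figures in the paragraph above can be re-derived from the per-cohort counts reported earlier; a brief consistency check (ours, not part of the original report):

```python
# Arithmetic check (ours) of the summary figures quoted above; all counts
# are taken verbatim from the text.
targeted = 54498 + 7097 + 62398 + 16427        # Cohort I + Katsushika + Cohort II + Suita
questionnaires = 43149 + 7096 + 52256 + 10960  # returned questionnaires
checkup = 17923 + 5388 + 19292 + 5307          # subjects with health checkup data

print(targeted, questionnaires, checkup)
print(round(100 * questionnaires / targeted), round(100 * checkup / targeted))
```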
Periodontal Evaluation for a New Alkasite Restorative Material in Noncarious Cervical Lesions: A Randomized‐Controlled Clinical Trial | 2364db23-2e00-4e37-b1ca-5edbd42c7c9f | 11471885 | Dentistry[mh] | Introduction Noncarious cervical lesions (NCCLs) arise as a result of the loss of a tooth's hard tissues from the cervical region through processes unrelated to the biofilm. About 46% of adults have NCCLs, whereas their prevalence increases in older populations (Teixeira et al. ). The genesis of NCCLs could be attributed to several factors. Predominantly, these factors include abrasive toothbrushing practices and consumption of acidic dietary substances; the effect of occlusal factors is nonetheless not conclusive (Goodacre, Eugene Roberts, and Munoz ). Treatment of NCCLs is indicated for esthetic and hypersensitivity‐related reasons. A restorative, surgical, or combined approach could be considered. The treatment plan for NCCLs should also target their etiologies to interrupt lesions' development (Goodacre, Eugene Roberts, and Munoz ). To restore NCCLs, glass ionomer cement (GIC) and resin composite (RC) are commonly used. A recent meta‐analysis showed no significant difference between the two materials regarding all the following parameters: marginal discoloration, marginal adaptation, secondary caries, color, anatomic form, and surface texture. In terms of the retention rate, GIC showed significantly better performance (Bezerra et al. ). In terms of the gingival‐related parameters adjacent to dental materials, these clinical outcomes have not been studied well in the long term. Nevertheless, some studies found that different restorative materials have different effects on the subgingival biofilm; a case in point is that amalgam and GIC have a better effect on the combination of a subgingival biofilm when compared with RC. Despite this, the clinical manifestations of the phenomenon are still not clear. (Paolantonio et al. ; Santos et al. ). 
One cross‐sectional study found that NCCLs restored with RC had a significantly higher percentage of bleeding sites compared with nonrestored NCCLs, which could be attributed to the increase in plaque accumulation (Gurgel et al. ). Cention N (CN) (Ivoclar‐Vivadent, Liechtenstein) is a relatively new alkasite restorative material and the first available bioactive RC (Tiskaya et al. ). The liquid of CN consists of four different monomers together with chemical- and photo-polymerization activators. It does not contain any acidic monomers or water. The powder consists of reactive and nonreactive fillers such as barium aluminum silicate glass, ytterbium trifluoride, iso filler, calcium barium aluminum fluorosilicate glass, and calcium fluorosilicate glass (Tiskaya et al. ). The calcium ion release of Cention N was found to be the highest among 13 other restorative materials, and the fluoride and hydroxyl ion release was found to be acceptable (Ruengrungsom et al. ). Although Cention N is classified as RC, its ion-releasing property may have a positive impact on the subgingival biofilm and may subsequently yield a better gingival response compared with traditional RC, as this property is believed to be associated with an increase in biofilm pH and an antibacterial effect (Daabash et al. ; Wiriyasatiankun, Sakoolnamarka, and Thanyasrisung ). The mechanical and ion‐releasing properties of this material have been evaluated in vitro (Tiskaya et al. ). However, only a few randomized‐controlled trials of Cention N are available, whereas, to the best of our knowledge, the gingival‐related clinical performance of Cention N in NCCLs has not been reported in the dental literature. According to the manufacturer's instructions, one of Cention N's indications is NCCLs. These lesions are often in contact with gingival tissue compromised by gingival recession (Naik, Jacob, and Nainar ).
Studying Cention N's gingival‐related clinical outcomes could provide clinicians with information about its performance, especially in such areas, and shed light on the potential benefits of ion‐releasing materials for gingival health. The aim of this clinical trial is to evaluate the periodontal response to Cention N restorations applied with or without an adhesive system in comparison with the standard restorative material for NCCLs: RM‐GIC (Bezerra et al. ). The null hypothesis tested was that Cention N with or without an adhesive system yields the same periodontal response as RM‐GIC.
Materials and Methods 2.1 Protocol Registration and Ethics Approval This study design followed the Consolidated Standards of Reporting Trials (CONSORT) statement (Schulz, Altman, and Moher ). Ethical approval was obtained from Damascus University (no. 2777/2021). Written informed consent was obtained from all participants before starting the restorative procedures, after a thorough explanation of the study aims, procedures, risks, and benefits was provided. The study design of this trial was registered at ClinicalTrials.gov, NCT05593159. 2.2 Study Design and Settings This trial is a double-blind, split-mouth randomized-controlled trial. Clinical procedures and follow-ups were conducted at the Department of Restorative Dentistry, Faculty of Dentistry, Damascus University, Syria, during the period from May 2022 to November 2023. 2.3 Sample Size and Recruitment To determine a sufficient sample size for the study, a pilot study with the same design as the present study was conducted. The pilot study included six participants who were evaluated for the retention of restorations after 9 months. PASS software (RRID:SCR_019099) was used for the sample size calculation. With an α of 0.05, a power of 90%, and a two-sided Paired Wilcoxon Signed-Rank Test, the minimal sample size was 19 in each group. After including a 20% dropout rate, the sample size per arm was 24. The inclusion criteria for patients were as follows: 18 years of age or older, in good general and periodontal health, with no abnormal tooth mobility or deep pockets, acceptable oral hygiene (teeth brushing at least once a day, and no generalized plaque accumulation), having at least 20 teeth under occlusion, and presence of three or more NCCLs that are deeper than 1 mm and involve both the enamel and dentin of vital teeth. The exclusion criteria were as follows: pregnancy, lactation, active severe bruxism habits, or xerostomia (Loguercio et al. ; Santos et al. ).
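The dropout adjustment above is simple arithmetic, and the power calculation can be approximated without PASS. The following Python sketch assumes a hypothetical effect size d (the pilot estimate is not reported) and uses the common normal-approximation shortcut for the Wilcoxon signed-rank test: the paired z-test sample size divided by the asymptotic relative efficiency 3/π.

```python
from math import ceil, pi
from statistics import NormalDist

# Hypothetical sketch, NOT the study's PASS computation. The effect size d
# below is a placeholder; the study derived its estimate from a 6-patient pilot.
alpha, power = 0.05, 0.90
are = 3 / pi          # asymptotic relative efficiency of Wilcoxon vs. paired z/t
d = 0.80              # assumed standardized effect size (hypothetical)

z = NormalDist().inv_cdf
n_paired = ((z(1 - alpha / 2) + z(power)) / d) ** 2   # pairs for a paired z-test
n_wilcoxon = ceil(n_paired / are)                     # ARE-adjusted Wilcoxon n

# The dropout inflation reported in the study: 19 pairs per group plus a
# 20% dropout allowance gives 24 per arm.
n_required = 19
n_enrolled = ceil(n_required / (1 - 0.20))
print(n_wilcoxon, n_enrolled)
```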
Patients were examined by two calibrated qualified restorative dentistry specialists to determine if they fulfilled the inclusion criteria. 2.4 Random Sequence Generation and Allocation Concealment Intra‐individual randomization was carried out using Microsoft Excel for Windows (RRID:SCR_016137); thus, every patient received three restorations randomly, one from each arm of the study. Randomization results were concealed from the operator using sequentially numbered, opaque, sealed envelopes. These envelopes were not opened until the patient's arrival. Restorative dental procedures were performed on allocated lesions starting with the tooth of the lowest number (universal numbering system) and moving clockwise. 2.5 Intervention Information about teeth brushing and oral and dietary habits was obtained to help instruct each patient on how to maintain their oral health and avoid traumatic brushing. At each follow‐up, the patients were reinstructed to ensure their adherence. Gingival health parameters were measured using a CP15 UNC probe (548/4 Medesy; Maniago, Italy). These parameters were plaque accumulation using the plaque index (PI) introduced by Silness and Loe (Table ) (Silness and Löe ), probing depth (PD) in millimeters, and bleeding on probing (BOP) (yes or no). Both PD and BOP were measured at three points for each tooth (mesiobuccal, mid‐buccal, and distobuccal). All operative procedures were conducted by one operator. To calibrate the procedures, five restorations per study group were placed following the protocol of this study. These restorations were not included in the study. Before restorative procedures, local anesthesia was administered using 3% Mepivacaine. Afterward, the tooth surface was cleaned using pumice (ProphPaste Pro‐N100, Switzerland) in a rubber cup under water cooling. Then, a retentive cord (Sure‐cord, Korea) was placed with minimum pressure using Gingival Cord Packer Universal (585, Medesy, Italy) and a rubber dam was placed. 
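The intra-individual randomization described earlier in this section (one restoration per arm for each patient, performed in the study with Microsoft Excel) can be sketched as follows; the function and seed are illustrative, not the study's actual allocation:

```python
import random

# Hypothetical reconstruction of the split-mouth allocation: each patient's
# three eligible NCCLs receive one restoration from each arm, in random
# order. The study used Microsoft Excel; the seed here only makes the
# sketch reproducible.
ARMS = ["CN + UA", "CN + RG", "RM-GIC"]

def allocate(n_patients, seed=2022):
    rng = random.Random(seed)
    # One random permutation of the three arms per patient; restorations
    # are then placed starting from the lowest tooth number, clockwise.
    return {pid: rng.sample(ARMS, k=3) for pid in range(1, n_patients + 1)}

allocation = allocate(25)
print(allocation[1])
```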
HYGENIC dental dam B clamp sets (Coltene/Whaledent, Inc., USA) were used to facilitate retraction of the gingival tissue. Each patient received a single RM-GIC and two Cention N restorations (Batch No. Z01V4K). In terms of Cention N restorations, the manufacturing company recommends using Cention N (Ivoclar-Vivadent, Liechtenstein) with a universal adhesive system (UA) in nonretentive cavities or with no adhesive system in retentive cavity preparations. Therefore, before application of the restoration, half of the NCCLs planned to be restored with Cention N were pretreated as follows: First, UA (Tetric N-Bond Universal, Liechtenstein) was applied with a micro-brush and rubbed against the tooth surface for 20 s. Second, a gentle air spray was applied to disperse the adhesive until a glossy, firm layer resulted. Third, the adhesive was light-cured for 10 s (1,200 mW/cm²) using a Woodpecker LED-F curing light (China), which was calibrated before each use. The other half was treated as follows: the NCCLs' surfaces were roughened gently using a round carbide bur (H1SEM.204.014 VPE 5 or H1SEM.204.016 VPE 5, Komet Dental, Lemgo, Germany) on a low-speed handpiece (each bur was used for no more than five lesions). Thereafter, a fine gingival retentive groove (RG) was made approximately 0.5 mm from the dentin–enamel junction with a small round carbide bur (H1SEM.205.010 VPE 5, Komet Dental, Lemgo, Germany). The preparation was carried out at a speed of 2,000 rpm without water cooling and with low pressure (each bur was used for no more than five lesions). After this pretreatment of the surface, Cention N was applied. In cavities with RGs, a small portion of the mixture was placed first in the RG, and then the rest of it was placed to fill the cavity and restore the tooth form (working time 3 min). Cavities with no retentive feature were bulk-filled with the filling material. Cention N was light-cured for 20 s (1,200 mW/cm²).
Afterward, finishing and removal of excess material were carried out using fine and extra-fine diamond burs (8852.314.014 or 852EF.314.014, Komet, Gebr. Brasseler GmbH & Co., Germany) under water cooling. Polishing was performed using OptraPol (Ivoclar-Vivadent, Liechtenstein). RM-GIC restorations were placed according to the following protocol: after washing and drying, but not desiccating, Dentin Conditioner 20% (GC, Japan) was applied and rinsed. Afterward, an RM-GIC Fuji II LC (GC, Japan) mixture was placed in a single layer in cavities less than 2 mm in depth. For cavities more than 2 mm in depth, the layering technique was used (each layer was light-cured for 20 s). Finishing and polishing were performed the same way as in the Cention N groups. 2.6 Calibration Procedures for Clinical Evaluation Two experienced dentists, neither of whom was the clinical operator in this study, were trained as follows: they assessed the periodontal health of 10 class V restorations according to the assessment methods used in this study. However, these restorations were not part of the restorations performed for this study. At each follow-up, one of the two assessors evaluated the periodontal health. 2.7 Clinical Evaluation For each follow-up, a new form was completed. Evaluators were blinded to previous evaluations during the follow-up sessions. The periodontal evaluation after the placement of the restorations was carried out the same way as that before the placement. The BOP, PI, and PD indexes were used for this purpose. Periodontal health was evaluated at baseline, 1 week after the restorations' application, and after 3, 6, 9, and 12 months. 2.8 Statistical Analysis Statistical Package for the Social Sciences (SPSS) software for Windows V25.0 (RRID:SCR_002865) was used to analyze the data. Data normality was checked using the Shapiro–Wilk test ( p < 0.001).
Comparisons were performed using the following tests: Friedman's test for intra- and intergroup comparisons of plaque accumulation and PD; the Cochran test for intra- and intergroup comparisons of gingival bleeding; the Wilcoxon signed-rank test for intra- and intergroup comparisons between two groups or two follow-ups for PD; and the McNemar test for intergroup comparisons of proportions between two groups for gingival bleeding. The significance level (α) was set at 0.05.
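For illustration, two of the categorical tests named above can be reproduced with the Python standard library alone. The BOP matrix below is made-up example data, not the study's dataset, and the exp(-Q/2) shortcut for the p-value holds only for the 2 degrees of freedom given by three study arms:

```python
from math import comb, exp

# Standard-library sketches of two categorical tests used above (Cochran's Q
# and exact McNemar), run on made-up BOP data -- not the study's dataset.

def cochrans_q(rows):
    """Cochran's Q for an N-subjects x k-treatments 0/1 matrix. With k = 3
    arms, df = 2, and the chi-square survival function reduces to exp(-Q/2)."""
    k = len(rows[0])
    col = [sum(r[j] for r in rows) for j in range(k)]   # per-arm totals
    row = [sum(r) for r in rows]                        # per-subject totals
    q = (k - 1) * (k * sum(c * c for c in col) - sum(col) ** 2) / (
        k * sum(row) - sum(r * r for r in row))
    return q, exp(-q / 2)

def mcnemar_exact(b, c):
    """Exact two-sided McNemar p-value from discordant-pair counts b and c."""
    n, m = b + c, min(b, c)
    return min(1.0, 2 * sum(comb(n, i) for i in range(m + 1)) * 0.5 ** n)

# Illustrative BOP outcomes (1 = bleeding) for 6 subjects x 3 arms.
bop = [(1, 0, 1), (0, 0, 1), (1, 1, 1), (0, 0, 0), (1, 0, 1), (0, 0, 1)]
q, p = cochrans_q(bop)
print(q, p, mcnemar_exact(2, 7))
```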
Results

The study sample consisted of 25 patients. A total of 28 patients were examined for eligibility according to the inclusion and exclusion criteria; 25 of them were finally included. The characteristics of the research subjects and NCCLs in this study are shown in Tables and . Each patient received three restorations; thus, the total sample in the study comprised 75 teeth (Figure ). The response rate was 100% during all follow-up periods; however, missing data were found for two patients at the 9-month follow-up.

3.1 PI

The statistical analysis of the PI data showed no significant difference between the groups at any follow-up (p > 0.05) (Supporting Information S1: Table ). As for the difference between the follow-ups for each group, the increase noted for all groups over time was not significant (mean increase = 0.06) (p > 0.05) (Supporting Information S1: Table ).

3.2 PD

For PD, there was no significant difference between the groups at any follow-up (p > 0.05), except at T3 (Supporting Information S1: Table ), where the significant difference was between CN + UA and RM-GIC (p = 0.022) (Supporting Information S1: Table ). On comparing the follow-ups within each group, a significant difference was noted for both the CN + UA (p = 0.005) and CN + RG (p = 0.001) groups (Table ), with a mean increase of 0.34 between T0 and T4 for the three groups. To determine whether the increase in mean PD was significant after 1 week and after 9 months, the Wilcoxon signed-rank test was conducted. There was a significant difference between the follow-up before the intervention on the one hand and those after 1 week and 9 months on the other hand for the three groups (p < 0.05), except for the comparison between T0 and T4 in the RM-GIC group, where the difference was almost significant (p = 0.053). Listwise deletion was applied.
3.3 BOP

Regarding BOP, there was no significant difference between the groups in any follow-up period (p > 0.05), as shown in Table . However, there was a significant difference in both the CN + UA (p = 0.027) and RM-GIC (p < 0.001) groups on comparing the follow-ups. To determine whether the increase in the percentage of positive BOP sites was significant after 1 week, Cochran's Q test was conducted (Table ). The test showed that the increase was significant for the CN + UA (p = 0.035) and RM-GIC (p = 0.003) groups. However, the percentage of BOP sites decreased to almost the pre-intervention level during the 3-, 6-, and 9-month follow-ups for all groups.
Discussion

There is a strong relationship between the tooth structure and the periodontal tissues; still, there are few clinical studies on this topic (Padbury, Eber, and Wang ). This randomized controlled trial sheds light on an important and usually neglected topic by evaluating the periodontal health of teeth restored with two ion-releasing materials. The exclusion criteria included pregnancy and lactation, as these variables could affect the periodontal status (Aghazadeh et al. ; Laine ). Although the relationship between bruxism and cervical restoration loss is not conclusive, individuals who reported active severe bruxism habits were also excluded to minimize the risk of restoration loss during the follow-ups. The restorations' retention rate for the CN + UA and RM-GIC groups was 100%, whereas 76% of restorations were present in the CN + RG group after 9 months; however, discussion of these outcomes is beyond the scope of this study. The statistical analysis for plaque accumulation showed no significant difference between the study groups during any of the follow-ups. This finding is consistent with those of a previous study comparing RM-GIC and micro-filled RC in subgingival restorations (Santos et al. ). These findings are also aligned with the results of two studies on NCCLs (Carvalho et al. ; Shinohara et al. ), which suggest no clinically detectable difference between different restorative materials in the short term. An increase in the mean PI values was noted during the 3-month follow-up, and these values remained higher than those before the intervention during both the 6- and 9-month follow-ups, without a significant difference. Although the presence of restorations is considered a factor in increasing plaque accumulation and, as a result, the development of periodontal disease, there is not enough evidence to show this relationship (Miller et al. ).
However, a previous retrospective study evaluated plaque accumulation on anterior RC restorations at a follow-up of 5–6 years and found that the presence of restorations had a negative effect on the gingival tissue, as it led to an increase in plaque accumulation (Peumans et al. ). The disagreement between the present study and that study may be due to the difference in the type of restorative material used; an in vitro study showed that the surface roughness of Cention N was the lowest compared with six other restorative materials, including RC (Kaptan, Oznurhan, and Candan ), which may cause less plaque accumulation. The difference in follow-up duration could be another reason for the disagreement. This study found no significant difference in PD between the groups in all but one follow-up period, indicating that the type of restoration does not affect the PD. This result is consistent with those of two previous studies that also compared the PD between RM-GIC and RC after the placement of NCCL restorations (Carvalho et al. ; Lucchesi et al. ). However, the current study found an increase in PD between the period before the placement of the restorations and 1 week after; this increase in mean PD remained almost constant during the 3-, 6-, and 9-month follow-ups and was significant in the CN + RG and CN + UA groups. In contrast, the PD did not increase in two previous studies after the placement of the restorations (Carvalho et al. ; Paolantonio et al. ). This inconsistency might be explained by the difference in the isolation method used: this study used rubber dam isolation, whereas the two other studies used only cotton rolls; the application of the gingival retraction cord and the rubber dam clamps could have had an adverse effect on the gingival tissue in terms of PD.
To date, no study has evaluated the PD of class V restorations placed under rubber dam isolation, so this explanation can be neither confirmed nor refuted. However, a clinical study has indicated that scaling of pockets less than 4 mm in depth results in loss of attachment (Cortellini and Tonetti ). As both the application of the retraction cord and the scaling procedure exert physical pressure and trauma on the periodontal tissue, that study may partly explain the loss of attachment occurring in our study. Regarding the placement of the retraction cord, the epithelial attachment withstands forces less than 1 N/mm² and breaks when a force of 2.5 N/mm² is applied, whereas the application of a gingival retraction cord requires an application force of approximately 2.5 N/mm² (Phatale et al. ). Additionally, a meta-analysis reported an increase in PD when applying a gingival retraction cord compared with cordless retraction methods, which may also partly explain the increased PD in our study (Wang et al. ). In conclusion, the application of the gingival retraction cord together with a rubber dam could explain the loss of attachment. Nonetheless, the average loss in this study was 0.4 mm; thus, the average PD increased from 1.5 to 1.9 mm, which is still within the normal range of PD. This study found no significant difference between the study groups during any follow-up period regarding BOP, indicating no clinically detectable effect of the restorative material type (CN or RM-GIC) on the gingival inflammatory response. To the best of our knowledge, no studies are available on gingival inflammation or gingival health when restoring NCCLs with Cention N; however, one study compared gingival inflammation when restoring pulpotomized primary teeth with Cention N or stainless-steel crowns and found that the gingival tissue adjacent to teeth restored with Cention N showed a better gingival response (Kaur et al. ).
These results are in agreement with those of the present study. However, this study found an increase in the percentage of bleeding sites between the period before the placement of restorations and 1 week after in the three study groups, with this difference reaching statistical significance in the CN + UA and RM-GIC groups. Nevertheless, during the 3-month follow-up, this percentage decreased to values similar to those before the intervention and remained around these pre-intervention values during the 6- and 9-month follow-ups. This can be attributed to the adverse effect of applying the gingival retraction cord and the rubber dam clamp on the gingival tissue, and to the disappearance of this effect on gingival bleeding within less than 3 months. To determine which of the two factors, applying the gingival cord or applying the rubber dam, had a greater effect on increasing gingival bleeding at the 1-week follow-up, the results of this study were compared with those of a study that applied only a gingival cord, without a rubber dam, for the purpose of taking an impression. In the latter study, the number of bleeding sites did not increase after 1 day or 10 days, indicating that the rubber dam clamp could have played the greater role in increasing bleeding sites at the 1-week follow-up in the present study (Sarmento et al. ). Another study also reported a negative effect of applying a rubber dam clamp on gingival recession, which may be a result of gingival trauma and inflammation during the period after the restorative procedures (Favetti et al. ). Of particular interest is the fact that BOP was not affected by the presence of a restoration, despite a noted increase in plaque accumulation after the restorations' application. This could be attributable to the effect of the restorative material on the composition of the plaque.
Evidence from a clinical investigation demonstrated a reduction in the pathogenic count within the bacterial plaque on teeth restored with RM-GIC (Santos et al. ). This reduction may be due to the release of different ions from RM-GIC, as the released fluoride interferes with the initial adhesion of bacteria to the surface of the restoration and inhibits bacterial metabolism and growth (van Dijken, Persson, and Sjöström ; van Dijken and Sjöström ). As for the two Cention N groups, Cention N incorporates silanized fillers, which are highly reactive, particularly in acidic environments (Tiskaya et al. ); this could have an antibacterial effect similar to that of RM-GIC. Additionally, the effect of this material on Streptococcus mutans and the increase in the pH of the bacterial plaque may have an impact on the composition of the plaque and its ability to cause gingivitis (Aparajitha et al. ; Daabash et al. ; Feiz et al. ; Wiriyasatiankun, Sakoolnamarka, and Thanyasrisung ). Apparently, the ion-release properties of the material could have had an important effect on plaque composition. Further supporting this point are the results of two microbiological investigations by Paolantonio et al. and Santos et al., which found an adverse impact of RC on the amount and bacterial counts of subgingival plaque compared with GIC and amalgam (Paolantonio et al. ; Santos et al. ). Additionally, one cross-sectional study on teeth restored with traditional non-ion-releasing RC found a significant difference between unrestored and restored NCCLs (Gurgel et al. ). Nevertheless, two clinical studies contradict this conclusion. The first compared RC and RM-GIC when restoring NCCLs and noted no significant difference between the two groups at the 3- and 6-month follow-ups (Carvalho et al. ).
The second is the clinical part of the study by Santos et al., which found no significant difference between RC and RM-GIC restorations accompanied by a coronally positioned flap procedure in terms of BOP at a follow-up of 6 months (Santos et al. ). The contradiction between these findings and those of the present study could be due to the small sample size in both studies (18 patients per study) and the strict oral care instructions followed when performing flaps in the second study. As the results of this study show, Cention N and RM-GIC restorations are treatment options with a benign effect on the gingival tissue when restoring NCCLs. However, treatment of shallow NCCLs that do not involve interdental bone loss with a connective tissue graft should be considered, as this option has been reported to yield a better gingival response than restorative treatment after 3 months with regard to periodontal variables (Leybovich et al. ). This study did not evaluate the subgingival microbial biofilm; thus, future studies are needed to evaluate the effect of different ion-releasing materials on gingival health in comparison with non-ion-releasing materials, especially in the long term. Some gingival-related parameters were not evaluated in this study, such as the width of keratinized tissue and the clinical attachment level. The evaluation of these variables, along with the subgingival extension of the restorations, in future studies will provide a more holistic picture of the effect of Cention N and other materials on gingival health.
Conclusions

Within the limitations of this study, it can be concluded that: (1) the clinical periodontal-related performance of Cention N is comparable to that of RM-GIC; (2) the presence of these ion-releasing restorations had no clinically detectable effect on gingival inflammation; and (3) the application of both the retraction cord and the rubber dam clamp may play a role in precipitating attachment loss.
Khattab Mustafa: study conception and design, visualization, methodology, project administration, analysis and interpretation of results, resources, original draft preparation, writing–review and editing. Ghaith Alfakhry: writing–review and editing. Hussam Milly: supervision, visualization, writing–review and editing.
Ethical approval was obtained from Damascus University (no. 2777/2021). Written informed consent was obtained from all participants before starting the restorative procedures.
The authors declare no conflicts of interest.
Two Outbreaks of Legionnaires Disease Associated with Outdoor Hot Tubs for Private Use — Two Cruise Ships, November 2022–July 2024 | eac2ac8e-cc55-4202-afc1-8d7cb0adc3e5 | 11500841 | Microbiology[mh] | Legionnaires disease is a serious pneumonia caused by Legionella bacteria. Hot tubs can be a source of Legionella growth and transmission when they are inadequately maintained and operated.
Epidemiologic, environmental, and laboratory evidence suggests that private balcony hot tubs were the likely source of exposure in two outbreaks of Legionnaires disease among cruise ship passengers. These devices are subject to less stringent operating requirements than are public hot tubs, and operating protocols were insufficient to prevent Legionella growth.
It is important for cruise ship operators to inventory hot tub–style devices across their fleets, evaluate the design features that increase the risk for Legionella growth and transmission, and test for Legionella .
Legionnaires disease is a serious pneumonia caused by Legionella bacteria. During November 2022–June 2024, CDC was notified of 12 cases of Legionnaires disease among travelers on two cruise ships; eight on cruise ship A and four on cruise ship B. CDC, in collaboration with the cruise lines, initiated investigations to ascertain the potential sources of on-board exposure after notification of the second potentially associated case for each ship. Epidemiologic data collected from patient interviews and environmental assessment and sampling results identified private hot tubs on selected cabin balconies as the most likely exposure source. To minimize Legionella growth, both cruise lines modified the operation and maintenance of these devices by removing the heating elements, draining water between uses, and increasing the frequency of hyperchlorination and cleaning. Hot tubs offer favorable conditions for Legionella growth and transmission when maintained and operated inadequately, regardless of location. Private hot tubs on cruise ships are not subject to the same maintenance requirements as are public hot tubs in common areas. Given the range of hot tub–type devices offered as amenities across the cruise industry, to reduce risk for Legionella growth and transmission, it is important for cruise ship water management program staff members to inventory and assess private balcony hot tubs and adapt public hot tub maintenance and operations protocols for use on private outdoor hot tubs.
Cruise Ship A Outbreak (November 2022–April 2024)

During December 2022–May 2023, CDC was notified of five Legionnaires disease (LD) cases among patients (patients 1–5) who had traveled on cruise ship A during the 14-day exposure period. All five cases (four laboratory-confirmed and one probable) were among passengers traveling on the same voyage in November 2022 (Supplementary Table; https://stacks.cdc.gov/view/cdc/165771). During August–September 2023, two additional laboratory-confirmed cases with travel on different cruise ship A voyages were reported to CDC (patients 6 and 7). In April 2024, an additional laboratory-confirmed case was identified in a guest who traveled on cruise ship A the previous month (patient 8). No lower respiratory specimens were available; six patients were hospitalized, and no patients died. Local health departments interviewed patients to identify potential exposures on and off the ship, including hotel stays, health care visits, or other activities. Patients 6 and 7 reported staying in cabins with a hot tub located on the private balcony.

In response to notification of the second case in February 2023, CDC reviewed the vessel's Legionella environmental sampling results from the preceding 6 months and water management program records. A total of 150 water samples were tested for Legionella during August 2022–February 2023 as part of the cruise line's routine water management program validation. A single non-pneumophila Legionella detection was identified in the potable water system during that time (August 2022); after localized hyperchlorination of the water system, Legionella was not detected. All potable water parameters were within control limits and monitored according to CDC requirements. Review of operation and maintenance records for public hot tubs in common areas indicated that CDC requirements had been met.
In March 2023, in response to the outbreak, the cruise line collected 260 1-L water samples from representative points of use, cabins of infected patients, heat exchangers, potable water tanks, decorative fountains, and public hot tubs in common areas. No Legionella was detected. The cruise line also conducted ship-wide hyperchlorination after sampling. An additional 76 potable and recreational water samples were collected during spring and summer 2023; no Legionella was detected. In August 2023, upon identification of the case in patient 6, in which private balcony hot tub use was first reported, CDC requested that all 10 private balcony hot tubs on the ship be closed and sampled because they had not been tested previously. L. pneumophila serogroup 2–14 (Lp2–14) and non-pneumophila Legionella species were detected in six of the 10 hot tubs. Of these six, four had Legionella concentrations >100 colony-forming units (CFU)/mL, and two had concentrations >1,000 CFU/mL. The hot tubs remained closed until their operation and maintenance protocols were modified and nondetectable Legionella sampling results were obtained. Legionella was not detected in environmental sampling of the potable water system or any recreational water features, including the balcony hot tubs, after the change in operation and maintenance protocols. From March 2024, when patient 8 traveled on ship A, through August 2024, approximately 300 samples were collected, and no Legionella was detected.

Cruise Ship B Outbreak (January–June 2024)

During February–July 2024, CDC was notified of four confirmed LD cases in patients who traveled on cruise ship B during their exposure periods (patients 9–12). Two of the cases occurred in passengers traveling on the same voyage in January 2024 (patients 9 and 10); one of these passengers traveled on two consecutive voyages. The voyages of patients 11 and 12 were in February and May, respectively.
Three patients received a positive Legionella urinary antigen test result, and one received a positive culture result in which L. pneumophila was detected; four patients were hospitalized, and no patients died. In response to the outbreak, CDC requested immediate closure of all hot tubs on the ship, including those in common areas and on private balconies, and sampling of all hot tubs and representative potable water locations. L. pneumophila serogroup 1 (Lp1) and Lp2–14 species were detected in all eight private balcony hot tubs on the ship, and Lp2–14 was detected in a single location in the potable water system. Of the testing performed on the eight private balcony hot tubs, two samples had Lp1 concentrations >10 CFU/mL. All balcony hot tubs remained closed until each had nondetectable Legionella postremediation sampling results. As the cruise line implemented changes to the operation and maintenance of the balcony hot tubs, Lp1 and Lp2–14 continued to be detected in two of the eight hot tubs, prompting additional remediation efforts and further refinement of operational and maintenance protocols.

This activity was reviewed by CDC, deemed not research, and was conducted consistent with applicable federal law and CDC policy.
CDC published two Epidemic Information Exchange (Epi-X) calls for cases and notified the European Centre for Disease Prevention and Control to identify other cruise-associated patients with LD because both ships included itineraries in Europe. Cruise operators of both ships notified guests and crew of the potential for Legionella exposure while the investigations were ongoing. CDC reviewed illness logs from both ship clinics. CDC also notified cruise operators of the risk for Legionella growth associated with private balcony hot tubs during regularly scheduled calls with industry partners in December 2023 and June 2024. Both cruise lines ultimately modified the operation and maintenance of the private hot tubs so that heating elements were removed; tubs were only filled upon guest request, drained between uses, and cleaned and disinfected more frequently. Ship A devices were additionally modified to remove filtration elements. Sampling is ongoing for both vessels.
Travel on cruise ships is a recognized risk factor for LD. CDC defines a cruise-associated outbreak as the occurrence of two cases in patients who had traveled on the same ship with voyages within 1 year of each other. In these investigations, both outbreaks involved patients with overlapping voyages, most notably ship A, with five patients who traveled on the November 2022 voyage. The outbreak on cruise ship A is the largest cruise-associated LD outbreak investigated by CDC since 2008. On ship A, the private balcony hot tubs were identified as a potential source of exposure after interviews with patients 6 and 7. These devices were found to have been operating for months in a manner conducive to Legionella growth, which included maintaining a water temperature in the Legionella growth range (77°F–113°F [25°C–45°C]) for multiple days without draining and operating with no residual disinfectant. In addition, some of these devices were located on decks only one floor above or below common outdoor amenities; previous investigations have shown that hot tubs located in private areas can disseminate aerosols to common areas and result in exposures, even in persons who do not use the hot tubs themselves. Environmental testing revealed extensive Legionella colonization. Subsequent identification of Legionella in private balcony hot tubs operating on ship B strengthened the case that these devices were the likely exposure source. According to current CDC requirements, private hot tubs are not required to have automated continuous disinfectant dosing and monitoring or pH monitoring, as is standard for public hot tubs. To meet CDC requirements, private hot tubs must only be shock-chlorinated, drained, and refilled weekly or between occupancies, whichever is sooner. Although the cruise lines adhered to current CDC requirements for operating and maintaining private hot tubs on ships A and B, these measures were insufficient to prevent Legionella growth.
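The "weekly or between occupancies, whichever is sooner" drain-and-refill requirement described above reduces to a simple date comparison. The sketch below is an illustrative aid, not CDC guidance: the function name and the sample dates are invented, and only the seven-day interval and the "whichever is sooner" rule are taken from the requirement.

```python
from datetime import date, timedelta

def next_required_drain(last_drain: date, occupancy_end: date) -> date:
    """Illustrative scheduling of the private hot tub rule: shock-chlorinate,
    drain, and refill weekly or between occupancies, whichever is sooner."""
    weekly_deadline = last_drain + timedelta(days=7)
    return min(weekly_deadline, occupancy_end)

# A 10-day occupancy: the weekly deadline arrives first.
print(next_required_drain(date(2024, 6, 1), date(2024, 6, 10)))  # 2024-06-08
# A 4-day occupancy: disembarkation triggers the drain instead.
print(next_required_drain(date(2024, 6, 1), date(2024, 6, 5)))   # 2024-06-05
```

As the investigations showed, even correct adherence to this schedule did not prevent Legionella growth, which is why both cruise lines moved to draining between uses rather than relying on the weekly minimum.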
Limitations

The findings in this report are subject to at least three limitations. First, clinical isolates were not available for comparisons to determine genetic relatedness. Second, although clinical tests indicated patients were infected with Lp1, environmental testing detected other Legionella species and serogroups in the balcony hot tubs of ship A. However, the presence of any Legionella species indicates that conditions supporting growth existed in these devices. Finally, multiple patients reported other possible exposure locations during their travel, such as hotels and shoreside excursions at ports of call, although the cruise ships were the only common exposure among the infected patients.

Implications for Public Health Practice

This report describes a previously unidentified source of Legionella exposure on cruise ships: hot tubs located on private cabin balconies, which have become more common as new ships enter service and older ones are renovated. A wide range of hot tub–style devices are used by cruise, hotel, and recreational water industries, including public hot tubs, jetted bathtubs, and hydrotherapy pools. Cruise lines and the hospitality industry should be aware of hot tub features that increase the risk for Legionella growth and transmission, including outdoor use, retention of water between uses, and the presence of recirculation, filtration, or heating systems. Private outdoor hot tubs, as described in this report, are not unique to cruise ships A and B. Inventory of hot tub–style devices by cruise ship operators to ensure that they are included in the vessel’s water management program and are routinely tested for the presence of Legionella could help prevent cruise ship outbreaks of LD. Adapting public hot tub maintenance and operations protocols for use on private outdoor hot tubs can reduce the risk for Legionella growth and transmission.
Microvascular injury and hypoxic damage: emerging neuropathological signatures in COVID-19
Panic disorder respiratory subtype: psychopathology and challenge tests – an update

Patients with panic disorder (PD) experience recurrent panic attacks (PA), which are characterized by sudden, unexpected episodes of intense fear and/or discomfort. According to the DSM-5 definition, a PA is characterized by at least four of 13 possible signs or symptoms. These include somatic, physical, and cognitive aspects, such as palpitations, sweating, trembling, shortness of breath, choking, chest pain or discomfort, nausea, dizziness, chills or hot flashes, paresthesia or numbness, depersonalization (feeling detached from oneself)/derealization, fear of losing control/going crazy, or fear of dying. Besides acute PAs, anticipatory anxiety and avoidance behavior are also frequent manifestations of PD. Therefore, the clinical presentation of PD can be very heterogeneous, which hinders disease management and compromises research outcomes. In an attempt to address this heterogeneity, distinct clusters of PD have been proposed on the basis of the predominant signs and symptoms: 1) respiratory; 2) nocturnal; 3) nonfearful; 4) cognitive; and 5) vestibular. After many efforts to identify PD subtypes, respiratory symptoms seem to be the best markers to classify PD patients into clusters. The link between PD and the respiratory system has been explored in several studies. Respiratory abnormalities are common in patients with PD. Resting subjects with PD present high minute ventilation, low CO2 concentration in expired air, and an irregular breathing pattern. These abnormalities of respiratory function are considered a vulnerability factor for PAs and seem to be specific to PD; they are not present in other anxiety disorders, such as social phobia and generalized anxiety disorder.
Psychophysiological responses can also confirm the link between PD and respiration when patients are subjected to respiratory challenge tests. Inhalation of elevated CO2 concentrations, voluntary hyperventilation, and other methods to trigger acid-base disturbances, such as sodium lactate infusion, can induce similar panicogenic symptoms in some patients with PD. Moreover, a bidirectional relationship between PD and pulmonary disorders – particularly chronic obstructive pulmonary disease and asthma – has been observed, reinforcing this link. Briggs et al. suggested two subgroups of PD, the respiratory (RS) and nonrespiratory (NRS) subtypes, based on the presence or absence of respiratory symptoms. Criteria for the RS require the presence of at least four of five respiratory-related symptoms: breathlessness, chest pain, choking, fear of dying, and paresthesia. (Hyperventilation episodes reduce CO2 levels in the blood, leading to respiratory alkalosis and culminating in paresthesia or numbness.) Briggs et al. also identified differences in response to pharmacotherapy between the RS and NRS subgroups. The respiratory cluster can be a valid means of distinguishing a PD subgroup with a specific clinical course and distinct response to treatment and challenge tests. Moreover, the use of categories can help guide clinical assessment and therapeutic approaches, as well as provide optimal methodological strategies for research. However, whether the RS can be recognized as a distinct subgroup of PD with a well-defined phenotype remains controversial. Our group has summarized psychopathology-related findings and other aspects to characterize this subtype elsewhere. In this context, the objective of this review rests on the contribution of recent findings about the respiratory PD subtype and focuses on its validity in clinical practice and research, considering both clinical phenotype (signs and symptoms) and biological profile (CO2 sensitivity).
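The Briggs et al. criterion described above is a simple symptom count, and can be expressed directly in code. The sketch below is illustrative only: the function and symptom names are our own shorthand, not a validated instrument.

```python
# Respiratory-related PA symptoms named in the Briggs et al. criteria.
RESPIRATORY_SYMPTOMS = {
    "breathlessness",
    "chest_pain",
    "choking",
    "fear_of_dying",
    "paresthesia",
}


def is_respiratory_subtype(reported_symptoms):
    """Classify a patient as respiratory subtype (RS) if at least
    four of the five respiratory-related symptoms are reported."""
    count = len(RESPIRATORY_SYMPTOMS & set(reported_symptoms))
    return count >= 4


# Example: a patient reporting four of the five symptoms meets the RS criterion.
patient = {"breathlessness", "chest_pain", "choking", "paresthesia", "nausea"}
print(is_respiratory_subtype(patient))  # True
```

Note that non-respiratory symptoms (here, nausea) do not count toward the threshold; only the five symptoms in the criterion set are tallied.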
Clusters associated with respiratory symptoms have characterized more than 50% of the overall sample in several studies of PD. In one group of 193 PD patients, 56.5% (n=109) were classified as having RS according to Briggs et al.’s criteria. , In another sample of 124 subjects, 63.7% (n=79) met the same RS criteria. PD was diagnosed in 431 subjects in a U.S. data survey of the general population (n=8,098). The presence of dyspnea during PAs discriminated a subtype that displayed increased odds of other panic symptoms associated with breathing, such as choking, chest pain, dizziness, and fear of dying, which accounted for 50.1% (n=216) of cases. In a sample of 8,796 individuals from six European countries, 2,257 were found to experience PAs. Participants were classified as having respiratory or nonrespiratory PA depending on whether PA was associated with shortness of breath. Among subjects with PA, the respiratory group represented 70% of cases, and was associated with increased health services utilization. The lifetime prevalence of respiratory PAs was 6.77% (3.14% in the nonrespiratory group), while the 12-month prevalence was 2.26% (1% in the nonrespiratory group). Roberson-Nay & Kendler described two distinct classes of PD: class 1, represented by subjects with respiratory-dominant symptoms, and class 2, comprising individuals with more somatic symptoms and few respiratory signs. Using a different exploratory analysis approach and distinct datasets, approximately 56% of subjects (n= 2,390) were found to belong to class 1.
The existence of PD subtypes was first suggested by Klein, who, based on the “suffocation false alarm theory,” proposed a subgroup of PD patients experiencing mainly respiratory signs and symptoms. Briggs et al. subsequently pioneered the evidence-based discrimination of PD subgroups, as described above. According to the neuroanatomical hypothesis of Gorman et al., PAs originate from a dysfunction in the fear network of the brain, which integrates various structures of the brainstem, amygdala, medial hypothalamus, and cortical regions. The serotoninergic (5-HT) system is well positioned to influence these areas, with neuronal cell bodies in the brainstem raphe nuclei and widespread axonal projections to the forebrain regions. In patients with symptomatic PD, studies have demonstrated decreases in midbrain 5-HTT and 5-HT1A receptor binding. This could reflect a compensatory process attempting to increase 5-HT neurotransmission, particularly in the dorsal periaqueductal gray-amygdala pathway, in order to inhibit hyperactivity or spontaneous neuronal discharge in this region. In addition, patients with PD have dysfunction of the GABAA receptors and/or altered brain GABA concentrations. Accordingly, PD has been treated primarily with drugs that have anxiolytic properties, including benzodiazepines, which increase the potency of GABA by modulating the function of GABAA receptors, and selective serotonin reuptake inhibitors (SSRIs), which increase synaptic availability of 5-HT by blocking its transport into neurons. Interestingly, patients with the RS experience a greater number of spontaneous PAs and respond better to antidepressants, whereas those with the NRS experience more situational PAs and respond more efficaciously to benzodiazepines. Since the first description of the RS in 1993, other approaches have sought to identify PD clusters. Cox et al.
identified a three-factor structure based on 23 signs and symptoms described in the DSM-III and in the Panic Attack Questionnaire: cluster 1 would correspond to dizziness-related symptoms, such as paresthesia; cluster 2 would represent the cardiorespiratory distress subgroup, who mainly experience tachycardia, dyspnea, choking, chest pain, and fear of dying; and cluster 3 would be associated with cognitive factors (fear of going crazy or fear of losing control). Using a similar analytical method, but a set of 13 PD signs and symptoms, a sample of 330 PD patients from six different countries was assessed. Subjects reporting four or more of these signs and symptoms (mainly fear of dying, chest pain/discomfort, dyspnea, numbness, and choking; n=163) tended to develop spontaneous PAs more frequently than those patients with fewer symptoms. In a Japanese sample (n=207), 15 clinical signs and symptoms (13 main symptoms including agoraphobia and anticipatory anxiety) were evaluated as present or absent. A principal component factor analysis revealed three clusters: cluster A comprised dyspnea, sweating, choking, nausea, and flushes/chills; cluster B included dizziness, palpitations, trembling or shaking, depersonalization, agoraphobia, and anticipatory anxiety; and cluster C encompassed paresthesia, chest pain, fear of dying, and fear of going crazy. Rees et al. performed a principal component analysis based on 11 symptoms, which were rated by a sample of 153 PD patients on a scale of 0 to 4 (not present, mild, moderate, severe, and very severe). The analysis detected five clusters: 1) shortness of breath and choking sensations, which seem to represent respiratory difficulty; 2) dizziness and depersonalization; 3) nausea, sweating, and flushing; 4) two groups of cardiovascular signs and symptoms, palpitations, and trembling; and 5) chest pain and numbness. 
According to this analysis, the component that explained the greatest proportion of variance among clusters was the class of respiratory symptoms (shortness of breath and choking sensation). Segui et al. also found three clusters, which they termed cardiorespiratory, vestibular, and general arousal. The cardiorespiratory cluster, which included the signs and symptoms palpitations, fear of dying, chest pain, paresthesia, trembling, and dyspnea, was the most representative one (26.1% variance). In two other studies, the symptoms of dyspnea and choking were grouped together in a respiratory cluster. However, in both studies, this subtype accounted for a lower percentage of variance than the other clusters. In an exploratory factor analysis with 343 PD patients, each of the 13 symptoms that can occur during a PA was rated on a qualitative scale of 0 to 8 (absent to very severe). Based on the scores of these symptoms, three subtypes could be discriminated: cardiorespiratory, autonomic/somatic, and cognitive (18.8, 6.4, and 3.8% of variance, respectively). The symptoms most strongly associated with the cardiorespiratory subtype were palpitation, shortness of breath, choking, chest pain, fear of dying, and numbness. The predominant signs and symptoms in the autonomic/somatic variety were sweating, trembling, nausea, chills/hot flushes, and dizziness. Finally, the cognitive type reported feelings of unreality, fear of going crazy, and fear of losing control. Two studies evaluated possible subgroups in Turkish patients with PD. Sarp et al. found that three factors – respiratory-circulatory, cognitive, and autonomic – explained 34.3, 16.5, and 10.8% of total variance, respectively. In 159 PD subjects, Konkan et al. found evidence for a five-factor model, distributed across autonomic (15% variance explained), vestibular (9.38%), cardiovascular (8.89%), pseudoneurologic (7.95%), respiratory (7.5%), and fear-of-dying (7.1%) signs and symptoms.
As described in , the number of symptoms considered and the rating method employed in the analysis might explain the differences among these studies. Roberson-Nay screened subjects from four epidemiological datasets and one clinical trial (total = 4,268 PD subjects). Each database was examined separately, according to different statistical approaches. Four databases fit better into a two-cluster model (cluster 1 corresponding to major respiratory signs and symptoms such as dyspnea, chest pain, choking, paresthesia, and fear of dying). One database revealed three distinct clusters (high respiratory and somatic symptoms, milder respiratory symptoms, and low respiratory and high somatic symptoms). The same authors compared several external validators (temporal stability, psychiatric comorbidity, and treatment response) between the RS and NRS, classified according to their own criteria. They found a higher prevalence of major depression and other anxiety disorders in patients with the RS, as well as a higher utilization of pharmacological and psychological treatment than in NRS subjects. PD clusters were explored in a recent study which employed anxiety markers based on Beck Anxiety Index (BAI) scores. A sample of 658 PD patients was divided into three classes: cognitive-autonomic subtype (n=196, 29.8%), with predominance of cognitive symptoms; autonomic subtype (n=197, 29.9%), with milder respiratory and cognitive signs; and a specific subtype, characterized by mild autonomic signs and absence of clear dimensions. For the autonomic class, the authors considered feeling of choking and difficulty breathing as respiratory symptoms and feeling hot, nausea, and flushes as autonomic symptoms. All anxiety markers were highest in the cognitive-autonomic subtype, with dyspnea, feeling of choking, and fear of dying as the predominant symptoms.
In summary, there is a trend to recognize respiration-related signs and symptoms as good markers to discriminate among distinct subtypes of PD. In this context, the assessment of PA signs and symptoms could be very useful to identify subgroups and, consequently, allow more accurate data analyses and better interpretation of results. Special care must be taken to identify analysis-linked putative biases, such as the number and type of symptoms and the best method to rank them. Taken together, these findings are indicative of a respiratory subtype group represented by diverse cardiorespiratory manifestations. In the face of these controversies, Drenckhan et al., in a differential analytical approach, divided physical and psychological PA symptoms to discriminate a “pure” respiratory cluster, resulting in separate dimensions of cardiac, respiratory, and vestibular/mixed somatic factors. Shortness of breath and choking were the main symptoms representing the respiratory factor. Indeed, these symptoms were included in the respiratory cluster in all studies except that of Segui et al. shows the main findings related to the aforementioned studies, while lists the sign-and-symptom profile of the cluster most representative of respiratory-related symptoms.
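The exploratory factor analyses reviewed above share a common recipe: each of the 13 PA symptoms is rated on an ordinal scale for every patient, and latent factors are then extracted from the inter-symptom correlations, with symptoms loading on the same factor forming a cluster. A minimal sketch of that workflow on synthetic data (the two-block loading structure, rating scale, and sample size are invented for illustration; scikit-learn's FactorAnalysis with varimax rotation stands in for the various methods used in the original studies):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)

# 13 panic-attack symptoms rated per patient, as in the studies above.
n_patients, n_symptoms, n_factors = 300, 13, 2

# Synthetic ground truth: symptoms 0-5 load on a "respiratory" factor,
# symptoms 6-12 on a second ("cognitive/autonomic") factor.
true_loadings = np.zeros((n_symptoms, n_factors))
true_loadings[:6, 0] = 0.8
true_loadings[6:, 1] = 0.8

latent = rng.normal(size=(n_patients, n_factors))
ratings = latent @ true_loadings.T + 0.3 * rng.normal(size=(n_patients, n_symptoms))

# Fit a two-factor model; varimax rotation aligns factors with symptom blocks.
fa = FactorAnalysis(n_components=n_factors, rotation="varimax", random_state=0)
fa.fit(ratings)
loadings = fa.components_.T  # shape: (n_symptoms, n_factors)

# Symptoms that load most strongly on the same factor form a cluster.
dominant_factor = np.abs(loadings).argmax(axis=1)
print(dominant_factor)
```

With well-separated loadings, the recovered dominant-factor assignments reproduce the two synthetic symptom blocks; in real data, the number of factors, the rating scale, and the rotation method all influence which clusters emerge, which is one plausible reason the studies above disagree.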
Controversy remains regarding the expression of distinct clinical features between respiratory-related and nonrespiratory clusters. Freire et al. and Song et al. found a lower age of onset among RS compared to NRS patients (27.0±7.9 vs. 31.1±9.1 years, p = 0.016 and 35.4±10.5 vs. 41.5±9.1 years, p = 0.04, in Freire et al. and Song et al., respectively). However, no differences were observed in other studies. Biber & Alkin found a longer duration of disease in the RS (50.8±60.7 vs. 23.1±23.5 months, p < 0.05), but this outcome was not found by others. A family history of mental disorders was more prevalent in RS patients in several studies. Demographic data, such as gender, age, occupation, education, and marital status, are consistently similar across the two groups. In one study, the presence of comorbidities, such as agoraphobia, major depression, and other anxiety disorders, was higher in RS groups, as was increased utilization of psychological and pharmacological treatments. In another study, the incidence of agoraphobia, fear of respiratory manifestations, and number of PA symptoms were all higher in RS than in NRS patients. However, Panic Disorder Severity Scale (PDSS) scores were similar in both subgroups. Items of specific questionnaires, such as fear of suffocation and fear of other respiratory symptoms, are endorsed more often by patients in the RS than in other PD clusters. RS patients exhibited more agoraphobic and panic-like symptoms and higher Anxiety Sensitivity Index scores than NRS patients, but there was no subtype distinction based on severity scales (PDSS and Panic and Agoraphobia Scale [PAS]). Other studies have provided further contradictory data concerning differences in symptom severity and presence of comorbidities between RS and NRS. Beck et al. reported no differences in the number of anxiety and panic signs and symptoms between the two groups; Biber & Alkin likewise found no difference in depression levels.
Conversely, Nardi et al. reported that NRS patients experienced more frequent depressive episodes than RS subjects did. Both subtypes had similar scores on anxiety and severity (PAS) scales. In a Portuguese study, patients with the NRS scored worse on the psychological domain of the WHOQOL quality of life questionnaire. Finally, no relationship between suicidal ideation or suicide attempt and the RS has been confirmed. Several biological markers of PD, such as antioxidant enzymes (glutathione peroxidase and superoxide dismutase), indicators of cellular immunity (adenosine deaminase), biochemical targets (phosphate levels), and genes related to hormone synthesis (namely, the PROGINS variant of the progesterone receptor gene) did not discriminate between RS and NRS. A recent neuroimaging study identified structural differences between the RS and NRS groups, defined according to the criteria of Briggs et al. RS patients had decreased cortical thickness in the frontotemporal cortex, which might be related to perception of respiratory changes (i.e., dyspnea) and emotional dysregulation. In another recent study, the magnitude of cardiorespiratory symptoms influenced the activation of some cortical areas (such as the insula) and brainstem in PD patients exposed to panic-related scenes. Taken together, these findings suggest that specific neural regions could be involved in the RS cluster of PD. In addition to the aforementioned biomarkers, several clinical markers of PD were assessed in a recent review. Structural or functional changes in brain areas, respiratory patterns, and psychophysiological parameters such as heart rate variability could be diagnostic markers of PD. Given the complex and multidimensional nature of the disorder, a combination of different biomarkers and clinical markers (signs and symptoms) could be a reliable strategy to guide better management of PD.
Future studies could highlight the utility of simple, low-cost markers, such as heart rate variability and breathing pattern, to discriminate different PD subtypes based on specific symptom clusters.
Respiratory challenge tests could constitute reliable tools to distinguish a putative respiratory cluster of PD. Inhalation of elevated CO2 concentrations is the basis of the most widely studied such test. Exposure to high CO2 concentrations reliably triggers fear and PA-like respiratory symptoms in humans and animal models. Indeed, CO2 hypersensitivity may be a risk factor for panic vulnerability. To test whether patients with the RS are more sensitive to CO2 inhalation than NRS ones, several studies assessed the prevalence of PA after exposure to a CO2 challenge test. All studies used the Briggs et al. criteria to discriminate RS; however, the studies were heterogeneous in terms of PA definition and type of CO2 challenge test. In one study, RS (n=28) and NRS (n=23) subjects were exposed to a single breath of 35% CO2/65% O2. A PA was triggered in 79% of RS versus 48% of NRS subjects (p < 0.05). Nardi et al. and Valença et al. employed the double-breath 35% CO2 inhalation test before and after 2 weeks and observed higher PA rates in RS than in NRS individuals in both tests. Freire et al. also found a higher percentage of PA induction in RS than in NRS subjects (80.3% [n=53] vs. 1.8% [n=6], p < 0.001) after a single exposure to CO2. One study found no difference in PA frequency using a distinct CO2 exposure method (5% CO2 rebreathing for 5 minutes). However, subjective suffocation, respiratory rate, and voluntary termination of the test were all higher in the RS group. summarizes these findings. Several studies evaluated CO2 as a potentially sensitive biomarker to identify RS, and found that RS patients are more sensitive to hypercapnia (higher levels of CO2 in the blood) than those with NRS. Using similar methodological designs, these studies divided PD patients into CO2 responders and CO2 nonresponders, based on the presence (CO2 responders) or absence (nonresponders) of PA during the double-breath 35% CO2 inhalation test.
The RS subtype was defined according to the Briggs et al. criteria. A higher percentage of RS patients was detected among CO2 responders than among CO2 nonresponders. summarizes the findings of studies assessing the magnitude of CO2 sensitivity in RS patients.
Although CO2 can induce a PA in most patients with PD, pretreatment with a single dose of a benzodiazepine (such as alprazolam or clonazepam) has been shown to block this effect. Additionally, treatment with SSRIs and tricyclic antidepressants reduced the sensitivity to CO2 in PD patients. RS patients treated with either benzodiazepines or tricyclic antidepressants improved faster than NRS ones. However, in the long run, treatment efficacy was similar in the two groups. RS patients may respond better to tricyclic antidepressants than to benzodiazepines. Moreover, imipramine, alprazolam, nortriptyline, and clonazepam effectively treat all PD patients. A combination of cognitive-behavioral therapy (CBT) and pharmacotherapy is the first line of treatment for PD. Respiratory exercises emphasizing diaphragmatic breathing are one of the components of CBT, leading to establishment of a regular breathing pattern and reduction of anxiety levels. Thus, considering the presence of common respiratory abnormalities in PD patients, especially in the RS, patients in this cluster might derive more benefit from CBT than NRS subjects do. Conversely, some studies have reported no difference between RS and NRS patients under CBT. Breathing techniques focusing on attenuation of hypocapnia (lower levels of CO2 in blood) and normalization of respiratory pattern seem to help PD patients. Studies measuring end-tidal partial pressure of CO2 by capnometry during exhalation have found lower levels of CO2 in RS than in NRS subjects. Nevertheless, no studies have assessed the effects of breathing techniques in distinct PD subtypes. Other interventions which include components that can modulate breathing may be helpful. Yoga involves breath control (pranayamas), meditation, and physical postures. The practice of yoga and a combination of yoga and psychotherapy have been found to reduce anxiety and body sensations in PD subjects.
Further investigation of breathing and other physiological parameters could help elucidate the potential mechanisms and efficacy of mind-body practices for management of PD symptoms.
Among the various differences in the clinical presentation of PD across subjects, the respiratory subtype can be well characterized by specific symptoms and a tendency toward greater responsiveness to respiratory stimulants (CO2). In this context, focus on the RS yields a better understanding of respiratory symptoms and the mechanisms associated with breath control in PD, which is considered an important aspect of the pathophysiology of PD and is still poorly understood. The current evidence base on the pathophysiology of PD includes several hypotheses based on neurobiological, behavioral, and cognitive theories. Alterations in the neural circuitry that involves the brainstem and fear network, and impairments in the pH chemosensory system, may be the main mechanisms involved in the respiratory abnormalities observed in PD patients. Individuals diagnosed with PD generally have a high perception of danger or threat. To assess a situation as threatening and mount an anxiety-like response, an individual must first detect environmental stimuli through sensory systems and then identify them as aversive or potentially dangerous. The combined actions of distributed neural circuits that emerge from the amygdala, bed nuclei of the stria terminalis, ventral hippocampus, and medial prefrontal cortex result in the interpretation and evaluation of the emotional value of environmental stimuli. If such stimuli are identified as threatening based on this assessment, they may elicit defensive behaviors by recruiting the brainstem and hypothalamic nuclei, resulting in anxious symptoms. The brainstem and its interactions regulate several homeostatic functions, including cardiorespiratory control and chemoreception. PD patients tend to exhibit abnormal brainstem activation in response to emotional stimuli when compared with healthy controls. Acid-base imbalance is another potential mechanism linking breathing and panic.
Both CO2 and lactate, for instance, elicit spontaneous PAs when administered exogenously, as a result of the activation of pH monitoring networks. CO2 inhalation leads to respiratory acidosis, and lactate causes metabolic alkalosis, generating bicarbonate as a byproduct and stimulating CO2 production. In humans, CO2 sensitivity lies on a continuum, with PD subjects being highly sensitive to low CO2 concentrations and healthy volunteers only experiencing panic-like symptoms at higher concentrations. Extracellular pH is a fundamental signal for regulation of homeostatic arousal, with effects on behavior and breathing. Chemoreceptors sensitive to CO2/H+ are activated when pH levels decrease. Among these chemoreceptors, acid-sensitive channels, such as acid-sensing ion channels (ASICs), transient receptor potential (TRP) channels, the vanilloid receptor 1 (TRPV1), and T-cell death-associated gene 8 (TDAG8), are closely related to the expression of fear. Detection of acidosis triggers ventilatory responses, such as hyperventilation and tachypnea. In patients with PD, elicitation of dyspnea and arousal occurs, characterizing the fear sensation. Respiratory and behavioral alterations are the main panicogenic symptoms. In this context, lower pH levels can be considered an interoceptive alarm to trigger a PA. Inhalation of CO2 lowers brain pH levels, and this cerebral acidosis activates acid-sensitive circuits (such as ASIC channels) in the amygdala to produce fear and panic. In short, acidosis sensed by acid channels may be translated into the autonomic, behavioral, and respiratory manifestations of a PA.
The respiratory subtype constitutes a distinct cluster of PD, characterized by specific symptoms and a tendency toward abnormally high CO2 sensitivity. Studies supported by more specific respiratory symptoms, psychophysiological markers based on cardiorespiratory outcomes, other clinical markers, neuroimaging findings, and respiratory challenges could improve characterization of the respiratory subtype.
The authors report no conflicts of interest.
Improving the efficacy of combined radiotherapy and immunotherapy: focusing on the effects of radiosensitivity

Radiotherapy can provide excellent local control of tumor growth by directly inducing single strand breaks (SSBs) and double strand breaks (DSBs) in DNA, as well as apoptosis and necrosis of tumor cells through the formation of reactive oxygen species (ROS) and free radicals, and is an irreplaceable therapeutic tool in cancer treatment. Radiotherapy also has potent immunomodulatory potential by promoting tumor-specific antigen production and enhancing the initiation and activation of cytotoxic T cells, thereby allowing tumor clearance in immune surveillance. In addition, radiotherapy may induce immunogenic cell death through the release of cytokines, inflammatory mediators, and other immune-related molecules. Although activated CD8+ T cells and other immunostimulatory cells can migrate and infiltrate to metastatic sites to exert anti-tumor effects, the upregulation of immunosuppressive cells by inflammatory factors may inhibit these effects and lead to tumor progression. This suggests that radiotherapy alone is not sufficient to completely eliminate primary and metastatic tumor lesions. Based on the understanding of cytotoxic T lymphocyte-associated antigen 4 (CTLA-4), programmed cell death protein 1/programmed death-ligand 1 (PD-1/PD-L1), and other pathways in the tumor immune microenvironment, immune checkpoint inhibitors (ICIs) can enhance the intrinsic immune response against tumor antigens by promoting T cell activation and function, and have been approved for the treatment of a variety of tumors. However, not all patients derive benefit from this treatment and the effective rate of ICIs alone is only 20-30%, with a majority of patients initially developing primary drug resistance or acquiring secondary drug resistance soon after treatment.
Several mechanisms of immune escape have been postulated to explain the failure of tumor immune attack, and a better understanding of these mechanisms will help in seeking therapeutic strategies to overcome immunotherapy resistance . Augmented immunotherapy involves increased release of tumor antigens, enhanced antigen presentation, and greater T cell infiltration. In recent years, it has been shown that the combination of radiotherapy and immunotherapy (CRI) produces mutual sensitization and enhances antitumor effects, and this synergy has shown survival benefits in multiple studies . To begin with, radiotherapy triggers the release and presentation of tumor-associated antigens (TAAs), which enhance systemic responses by triggering the recruitment of antigen-presenting cells (APCs), such as macrophages, dendritic cells (DCs), and B cells, that enhance T-cell infiltration and promote anti-tumor immune responses in the host . Radiotherapy can also reshape the tumor microenvironment to reduce immunotherapy resistance, induce antigen release and cross-presentation by DCs, and trigger the recruitment and activation of APCs, which play a key role in the antitumor immune response . Moreover, radiation promotes the release of cytokines and chemokines, leading to increased production and recruitment of fibroblast growth factor (FGF), transforming growth factor-β (TGF-β), interleukin 1β (IL-1β) and tumor necrosis factor (TNF), which activate Treg cells, myeloid-derived suppressor cells and cancer-associated fibroblasts . A recent study indicated that radiation-induced DNA DSBs upregulate PD-L1 expression in tumor cells via the ATM/ATR/Chk1 kinases, but immunotherapy can counteract the immunosuppressive effects caused by radiotherapy . In addition, dysfunction of the tumor vascular system can lead to an immunosuppressive microenvironment and induce radioresistance, and immunotherapy creates a potential opportunity to reduce tumor hypoxia and improve radiosensitivity.
When immunotherapy activates tumor-reactive immunity, the resulting CD8 + T cell activation and interferon (IFN)-γ production can sensitize tumors to radiation therapy through mechanisms that include normalization of the tumor vascular system and alleviation of tissue hypoxia (Fig. ). Therefore, using CRI to synergistically counteract the innate and adaptive immune resistance of cancer cells, and to bypass immune tolerance and exhaustion, is highly promising clinically. The identification of biomarker-based approaches is central to the development of clinical strategies for CRI, but most previous studies on the efficacy of CRI have focused on the dose, timing, efficacy, and sequence of the two treatments . There is no standardized choice for the sequencing of radiotherapy combined with immunotherapy, so the timing used in current studies varies. In multiple preclinical and clinical trials, immunotherapy prior to or concurrent with radiotherapy is the superior choice . There is additional evidence to suggest that sequential therapy and the early use of immunotherapy after radiotherapy can increase the clinical benefit, which favors newly recruited T cells in destroying tumors . Although preclinical work has shown that immunotherapy has a radiosensitizing effect, the window of opportunity for optimizing this synergy is limited because it involves many confounding factors . Therefore, the optimal sequence of radiotherapy and immunotherapy still needs to be explored through large randomized clinical trials. There is now increasing evidence that the intrinsic radiosensitivity of tumor cells also influences the release of cancer cell antigens and affects antigen-specific T cell activation during the radiation-induced cancer immune cycle .
As is well known, the most significant radiobiological factors affecting tumor response to radiotherapy are summarized as the “5 Rs”: DNA damage repair, redistribution in the cell cycle, repopulation, reoxygenation, and intrinsic radiosensitivity of cancer cells . Among them, the radiosensitivity of tumor cells is the main determinant of tumor response to radiation . Recently, reactivation of the antitumor immune response has been recognized as the “6th R”, which extends the concept of radiosensitivity beyond the tumor cells themselves and supports improved outcomes when radiotherapy is combined with immunotherapy . In this review, we focus on the radiosensitivity of tumor cells to explore its influencing factors, prediction methods and interactions with the immune system. Furthermore, we explore the predictive value of radiosensitivity for CRI efficacy, which is expected to provide new directions for improving the efficacy of CRI.

The influence factors of tumor radiosensitivity

The intrinsic radiosensitivity of tumor cells is the main determinant of tumor response to radiation, and it involves multiple tumor signaling pathways and layers of molecular biological information (Fig. ). The cellular origin and differentiation of tumor tissues are major factors affecting the radiosensitivity of tumor cells: tumors originating from radiosensitive tissues and poorly differentiated tumors are more sensitive to radiation, while well-differentiated tumors are less sensitive . The radiosensitivity of an individual also depends to a large extent on epigenetic factors, and the epigenetic mechanisms that determine the selection of metabolic patterns contribute to the individual radiosensitivity and adaptability of an organism. On the one hand, DNA methylation affects the initial damage process; on the other hand, a shift toward de novo methylation is associated with the further development of protective and repair processes .
However, the exact genetic factors underlying inter-individual differences in cellular radiosensitivity are unknown. Understanding the cellular and genetic basis of radiosensitivity and identifying individuals with higher or lower radiosensitivity will facilitate population risk assessment, disease prediction, individualized radiotherapy, and the development of radiation protection standards . Moreover, observations of human tumors have revealed a clear relationship between cell proliferation and cell renewal rates and radiosensitivity. Tumors with a rapid average growth rate and an elevated cell renewal rate are more sensitive to radiation, and cellular radiosensitivity differs across cell cycle phases, so the redistribution of cell cycle phases within the cell population after irradiation can alter radiosensitivity. In spite of the many factors (e.g., dose, exposure volume, sex, age, underlying disease, and lifestyle) that may influence individual radiosensitivity and susceptibility to radiation-induced cancer, inherent cellular radiosensitivity is genetically determined and underpinned by genetic alterations involving DNA damage repair . Genetic alterations in proteins involved in DNA damage repair are responsible for individual differences in radiation response. Mutations in DNA repair response-related genes (e.g., p53, ATM, BRCA1, BRCA2, ERCC1, XRCC3 and Rad51) have also been found to be associated with radiosensitivity in lung cancer . For instance, individuals with homozygous mutations in ATM have an approximately three-fold increase in radiosensitivity at the cellular, tissue and organismal levels compared to the average individual . The development of DNA-based markers is currently underway, and areas for additional research include the role of somatic mutations in DNA damage response genes that affect radiosensitivity.
Exposure of cells to extracellular matrix proteins can increase radioresistance by promoting DNA damage repair and activation of the Akt/MAPK signaling pathway . It has been demonstrated that the anti-apoptotic protein nucleolin (C23) can modulate radiosensitivity in non-small cell lung cancer (NSCLC) by affecting the activity of DNA-dependent protein kinase (DNA-PK) . There is growing evidence that viral pathogenic factors are associated with the regulation of cellular radiation response, treatment outcome, and clinical prognosis in patients following radiotherapy, with the regulation of DNA damage repair mechanisms being the most common target . Malignancies with a viral etiology, such as those associated with human papillomavirus (HPV), Epstein-Barr virus (EBV) and other viruses, are more immunogenic and more sensitive to anticancer therapy. One study identified a group of head and neck squamous cell carcinoma (HNSCC) patients who may benefit from CRI and showed a significantly improved prognosis in patients with HPV-positive tumors, attributed to increased intrinsic radiosensitivity and possibly to the modulation of cytotoxic T-cell responses in the tumor microenvironment . A recent study indicated that in HPV-positive HNSCC, the virus hijacks cellular mechanisms of DNA repair, alters cell cycle distribution, induces cell proliferation and displays peculiar hypoxic kinetics during radiation treatment . The mechanism described involves a reduced ability to repair DNA double-strand breaks, accompanied by enhanced radiation-induced G2/M cell cycle arrest . Additionally, excessive expression of immune checkpoints is also strongly associated with radiosensitivity: high PD-1 expression was significantly associated with the clinical prognosis of HPV/p16-positive HNSCC, and patients in the radioresistant group and HPV/p16-negative patients with radioresistant genetic markers could benefit from combined CRI .
Research on the EBV-regulated radiation response has centered on LMP-1, which is expressed in most EBV-associated malignancies. In nasopharyngeal carcinoma (NPC), LMP-1 inhibits DNA double-strand break repair by suppressing the phosphorylation and activity of DNA-PKcs, a key enzyme of the NHEJ pathway, and by inhibiting ATM-mediated repair of DNA double-strand breaks . In addition to the tumor cells themselves, environmental factors such as oxygenation status may also affect radiosensitivity by further modulating damage induction and cellular responses . Therefore, since hypoxia is a classical driver of tumor radiation resistance, its elimination may be a potential solution to radioresistance . Hypoxia-inducible factor-1 (HIF-1) remains active in cells that survive radiation therapy and is associated with tumor cell resistance to radiotherapy; it has been suggested that it may modulate tumor radioresistance through reprogramming of glucose metabolism and cell cycle regulation . Tumors contain different proportions of intrinsically radioresistant cancer stem cells (CSCs), which are closely associated with tumor hypoxia, and HIF-1α contributes to the development and maintenance of the CSC phenotype . Compared to differentiated tumor cells, the radioresistance of CSCs is characterized by reduced accumulation of radiation-induced DNA damage and increased activation of anti-apoptotic signaling pathways . Current strategies for predicting normal tissue radiosensitivity rely on genomics and large-scale prospective studies, and further research is still needed to identify the best predictive methods for radiosensitivity .

The prediction methods of tumor radiosensitivity

The radiosensitivity of tumor cells is strongly influenced by molecular variation at the genomic, transcriptional and translational levels.
Radiosensitivity is a measure of the response of cells, tissues or individuals to ionizing radiation and can be used to predict which individuals will benefit from radiotherapy. Recent advances in gene sequencing and microarray technology for high-throughput RNA analysis have driven interest in identifying signatures that measure the intrinsic radiosensitivity of tumor cells. The development of a successful predictive assay of radiosensitivity has been a major research goal, and many genetic markers have been developed to predict the radiosensitivity of tumors . These methods can be broadly divided into two categories. The first characterizes the surviving fraction of cancer cell lines after radiation, which reflects the intrinsic radiosensitivity of cancer cells but fails to consider the influence of non-malignant cells in the tumor microenvironment, particularly the role of anti-tumor immunity . The second predicts patient progression after radiotherapy; it is dedicated to predicting the clinical outcome of radiotherapy, but it cannot be used for cellular-level studies and makes it difficult to reveal the underlying radiobiological mechanisms . Nevertheless, how to build a radiosensitivity prediction model has not been discussed systematically in recent years. The traditional experimental approach to determining intrinsic radiosensitivity is the surviving fraction of tumor cell lines after a single dose of 2 Gy (SF2), but this assay is not applicable for routine use, and alternative strategies must be sought. The radiosensitivity index (RSI) is a 10-gene model based on the SF2 values of 48 human cancer cell lines and is a measure of clonogenic survival after a given radiation dose . The 10 genes (AR, cJun, STAT1, PKC, RelA, cABL, SUMO1, CDK1, HDAC1, and IRF1) play crucial roles in the DNA damage response, histone deacetylation, cell cycle regulation, apoptosis and proliferation .
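The published RSI is computed as a linear combination of the ten genes' rank-ordered expression within each sample, with fixed coefficients fitted against SF2. The sketch below illustrates that structure only; the coefficient values here are hypothetical placeholders, not the published weights.

```python
# RSI-style rank-based linear model. COEFFS values are hypothetical
# placeholders for illustration; the published RSI uses fixed weights
# fitted against SF2, and lower scores indicate greater radiosensitivity.
RSI_GENES = ["AR", "cJun", "STAT1", "PKC", "RelA",
             "cABL", "SUMO1", "CDK1", "HDAC1", "IRF1"]
COEFFS = dict(zip(RSI_GENES, [-0.01, 0.013, 0.025, -0.002, -0.004,
                              0.107, -0.0003, -0.009, -0.020, -0.044]))

def rank_within_sample(expression):
    """Rank-transform one sample's expression values (1 = lowest)."""
    order = sorted(expression, key=expression.get)
    return {gene: rank for rank, gene in enumerate(order, start=1)}

def rsi_score(expression):
    """Weighted sum of within-sample gene ranks (RSI-style score)."""
    ranks = rank_within_sample(expression)
    return sum(COEFFS[g] * ranks[g] for g in RSI_GENES)
```

A sample would then be called radiosensitive or radioresistant by comparing its score against a pre-specified cutoff; a threshold of 0.3745 has been used in pan-cancer analyses. The rank transform makes the score robust to platform-specific scaling of raw expression values.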
The RSI prediction model is a linear regression algorithm designed to detect intrinsic tumor radiosensitivity independently of cancer type, and it has been independently validated as a pan-tissue biomarker of radiosensitivity at multiple disease sites . A 31-gene signature was developed by analyzing the NCI-60 cancer cell panel for genes associated with SF2, and its correlation with radiosensitivity has been validated in various malignancies . In addition, measuring the oxygen partial pressure of a tumor can indicate its level of hypoxia, which can help predict its radiosensitivity . Unfortunately, these parameters, even when used in combination, are insufficient to predict tumor radioresistance for clinical use. Since the relationship between radiation dose and survival is nonlinear, various mathematical formulas have been proposed to fit the radiation survival curve. The linear quadratic (LQ) model has become the most popular calculator for analyzing and predicting ionizing radiation response in the laboratory and in the clinic, where the α/β ratio is used to characterize the sensitivity of specific tissue types to fractionation . The LQ model provides a simple equation between cell survival and delivered dose: S = exp(-αD - βD²) . The radiosensitivity of cells is influenced by complex interactions between intrinsic polygenic traits. As the mechanisms and biomarkers of radiosensitivity have become better understood, gene expression classifiers containing a few key genes have been used to predict radiosensitivity in specific tumor types or across human cancers . Based on the RSI, the LQ model, and the time and dose of radiotherapy received by each patient, a team derived a genome-based model for adjusting radiotherapy dose (GARD) on more than 8,000 tumor samples from more than 20 tumor types .
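The LQ equation and the GARD construction can be made concrete in a few lines. In the published derivation, a patient-specific α is obtained by substituting RSI for survival S in the LQ equation at a single 2 Gy reference fraction with β fixed at 0.05 Gy⁻², and GARD is then n·d·(α + β·d) for the delivered schedule. The sketch below assumes that form and is illustrative, not a clinical calculator.

```python
import math

BETA = 0.05  # Gy^-2, fixed in the published GARD derivation

def lq_survival(dose, alpha, beta=BETA):
    """Linear-quadratic surviving fraction S = exp(-alpha*D - beta*D^2)."""
    return math.exp(-alpha * dose - beta * dose ** 2)

def alpha_from_rsi(rsi, ref_dose=2.0, beta=BETA):
    """Back-solve a patient-specific alpha by equating RSI with LQ
    survival after a single reference fraction (2 Gy)."""
    return (-math.log(rsi) - beta * ref_dose ** 2) / ref_dose

def gard(rsi, n_fractions=30, dose_per_fraction=2.0, beta=BETA):
    """GARD = n*d*(alpha + beta*d); higher values predict a greater
    radiotherapy effect for the delivered schedule."""
    a = alpha_from_rsi(rsi)
    return n_fractions * dose_per_fraction * (a + beta * dose_per_fraction)
```

For a standard 60 Gy in 30 fractions, a more radiosensitive tumor (lower RSI) yields a higher GARD, which is the basis for proposing individualized, biology-guided dose prescriptions.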
GARD predicts the efficacy of radiotherapy and guides the radiation dose to match individual tumor radiosensitivity, with higher GARD values associated with better radiotherapy efficacy. Given that the range of GARD values varies among cancer types, RSI alone cannot fully represent the treatment effect, and tumor type and genetic testing must be combined to determine the appropriate radiotherapy dose for individual patients. Besides the classical biological mechanisms mentioned above, gene sequencing has further revealed the regulatory role of non-coding RNAs in radiosensitivity, and their high-throughput properties facilitate the study of radiosensitivity mechanisms. A previous study used a gene expression classifier to predict radiosensitivity, treating radiosensitivity as a continuous variable, using significance analysis of microarrays for gene selection, and using a multiple linear regression model for radiosensitivity prediction . Three novel genes (RbAp48, RGS19 and R5PIA) whose expression values correlated with radiosensitivity were identified in the gene selection step and were transfected into cancer cell lines. The results established that the RbAp48 gene could increase radiosensitivity 1.5- to 2-fold and increased the proportion of cells in the G2-M phase of the cell cycle. In addition, the study showed that overexpression of RbAp48 was related to dephosphorylation of Akt, suggesting that RbAp48 may exert its effects by antagonizing the Ras pathway. This study established that radiosensitivity can be predicted from gene expression profiles and introduced a genomic approach to identify novel molecular markers of radiosensitivity . Moreover, some traditional pathology techniques remain valid for assessing tumor radiosensitivity. For instance, hematoxylin and eosin staining can be used to identify radiosensitive (e.g., seminoma) or radioresistant (e.g., glioma) tumors (Fig. ).
More advanced pathology techniques such as DNA methylome analysis are now used to classify tumors, but they do not yet guide the clinical prescription of radiotherapy doses. The current strategy for predicting normal tissue radiosensitivity relies on genomics and large-scale prospective studies, and further studies are still needed to explore the best predictive methods for radiosensitivity.

The biomarkers of tumor radiosensitivity

Unsatisfactory radiosensitivity has long plagued radiotherapy, and finding biomarkers that predict radiosensitivity could help improve its efficacy. Chromosomal aberrations and DNA damage, in particular DSBs, are among the few cellular markers that show some correlation with cellular radiosensitivity. Signaling pathway molecules involved in the DNA damage response are excellent candidates for evaluation as radiosensitivity biomarkers; relevant biomarkers include MRE11, AIMP3, NBN, and BRE, with MRE11 potentially serving as a predictive biomarker of radiotherapy benefit . Current research suggests that the γ-H2AX assay, as a rapid and sensitive biomarker, can be used in epidemiological studies to measure changes in radiosensitivity. γ-H2AX foci analysis together with DSB repair gene polymorphisms can be used to assess cellular radiosensitivity, which will assist in population risk assessment, disease prediction, individualized radiotherapy, and the development of radiation protection standards . Additionally, evaluation of the predictive significance of the systemic immune-inflammation index (SII) for overall survival and radiosensitivity in advanced NSCLC showed favorable radiosensitivity in the low-SII group, and higher SII levels were associated with poorer overall survival and radiosensitivity . Cellular radiosensitivity can be assessed by quantifying DSB damage and repair . It has been observed that among the different types of DNA damage, DSBs have the slowest and most lethal repair dynamics.
Therefore, they are more helpful in explaining clinical radiosensitivity than other types of damage with rapid repair dynamics . The development of DNA-based markers is currently underway, and areas for further research include the role of somatic mutations in DNA damage response genes that affect radiosensitivity . The molecular mechanisms involved in the radiation-induced response are complex, and gene expression levels do not consistently represent the properties of all proteins in normal or tumor cells; therefore, direct detection of protein expression may be more effective in capturing the complexity of the mechanisms and the large number of molecular signatures involved in cellular radiation-induced responses. The proteomic approach allows the identification of the various proteins involved in the cellular response to ionizing radiation, which may be useful in identifying potential candidates for use as predictive biomarkers. The radiosensitivity of tumors is related to the basal expression levels of intracellular and cell membrane proteins, and direct detection of protein expression in proteomics studies allows the characterization of protein sequences and post-translational modifications, which can be used for the early diagnosis, prognosis and treatment of cancer . Current proteomics technologies can detect and analyze proteomic information from cells, tissues or body fluids, providing a better platform for biomarker research and development . High-throughput radioproteomics is the latest tool, in which mass spectrometry (MS) is used to analyze and identify unknown proteins by converting protein molecules into gas-phase ions through an ionization source and applying the instrument's electromagnetic field to separate ions by their mass-to-charge ratio.
The advantages of MS are its speed, high sensitivity and high resolution. Proteomics research based on liquid chromatography-mass spectrometry (LC-MS) is now widely used . The intrinsic radiosensitivity of NSCLC is mainly regulated by signaling pathways involving proteoglycans, focal adhesion and the actin cytoskeleton in cancer. Radiosensitivity-specific proteins can guide clinical individualized radiotherapy by predicting the radiation response of NSCLC patients .

The effect of radiosensitivity on the efficacy of CRI

In the era of immunotherapy, reliable genomic predictors to identify optimal patient populations for CRI are lacking. A comprehensive analysis of radiosensitivity-associated genes and proteins in lung cancer and other solid tumors has been used to identify potential biological predictors of radiosensitivity . There is some evidence that radiosensitivity can predict the effect of radiotherapy and immunotherapy (Table ) . To determine whether tumor radiosensitivity correlates with immune system activation across tumor types, Tobin et al. analyzed 10,240 genotypically distinct solid primary tumors, using 12 chemokine genes to define intratumoral immune activation, and determined that low RSI (using an RSI threshold of 0.3745) was significantly associated with elevated immune activation, supporting the association of RSI with immune-related signaling networks in patients' tumors . In another study, a total of 12,832 primary tumors from 11 major cancer types were analyzed in relation to DNA repair and immune subtypes in order to determine whether genomic scores of radiosensitivity were associated with immune responses. The results showed that RSI was related to various immune-related signatures and predicted responses to PD-1 blockade, emphasizing the promising potential of RSI as a candidate biomarker for CRI .
In addition, a study identified enhanced immune checkpoint interactions in radioresistant tumors, providing a new theoretical basis for combining radiotherapy and ICIs in the treatment of HNSCC . RSI-low tumors may be characterized by higher genomic instability and consequently a higher mutational burden, which is associated with a dominant IFN-γ signaling response and predicted efficacy of PD-1 blockade. Taken together, RSI-low tumors may represent a distinct subgroup and therapeutic target for immunotherapy . The molecular mechanisms underlying the biological effects of radiotherapy can affect the cellular response to and repair of DSBs, but research on the mechanisms linking RSI and the immune response is currently limited . To further explore the relationship between RSI and the immune response, a team used whole-transcriptomic and matched proteomic data from 12,832 primary and 585 metastatic tumors and found that RSI was associated with a variety of immune-related genomic and molecular features. Lower RSI was associated with higher homologous recombination deficiency (HRD) scores and higher tumor mutational burden (TMB), suggesting the presence of defective DNA repair mechanisms and potential responsiveness to immune-based therapies . Lower RSI was also correlated with a higher RNA stemness score, indicating greater stemness and tumor de-differentiation, which is in turn related to increased PD-L1 protein expression .
HRD scores correlated with genes involved in homologous recombination repair, including BRCA1, BRCA2, RAD51B, and RAD51C, and alterations in these genes were related to radiosensitivity . Intriguingly, RSI-low tumors in gastric cancer exhibit both higher microsatellite instability (MSI) and higher TMB, molecular profiles that mark subgroups with a favorable prognosis after immunotherapy . Furthermore, since the RSI genes STAT1 and IRF1 are downstream of IFN-γ-mediated signaling, RSI correlates better with various immune-related molecular features and phenotypes than other genes and genetic features associated with radiation response . At the same time, the immune system plays a crucial role in tumor radiosensitivity. To explore the relationship between intrinsic tumor radiosensitivity and the immune system, a study investigated radiation-induced tumor equilibrium and dormancy in animal models, and whether host immune responses contribute to radiation-induced tumor equilibrium . The study used two mouse models, TUBO (HER2-positive breast cancer) and B16 (melanoma), and observed four possible tumor responses to radiotherapy: non-responsive tumors (no response to radiotherapy); responsive tumors (tumor regression observed within 10 days after radiation); stable tumors (tumors that regress and remain stable and palpable during a 34-60-day observation period); and late recurrent tumors (tumor recurrence after 60 days). The inherent cellular radiosensitivity of tumors is frequently hypothesized to explain the differences in tumor regrowth rates observed after radiotherapy, and this study determined the radiosensitivity of tumor cells taken from mice that responded variably to radiotherapy. These tumors were surgically removed, digested into single-cell suspensions, subjected to 2, 5, or 10 Gy of in vitro irradiation, and assessed with clonogenic assays.
The results demonstrated that tumor cells with different responses to radiation in vivo exhibited indistinguishable radiosensitivity in vitro. This finding revealed that the intrinsic radiosensitivity of tumor cells could not explain the different tumor responses to local radiotherapy; in contrast, immune cells and their cytokines were shown to play a pivotal role in inhibiting tumor cell regrowth in the two experimental animal model systems. Traditional radiosensitivity studies have focused on tumor cells, neglecting the effects of the tumor microenvironment, which consists of stromal and immune cells . To explore the relationship between RSI and its associated tumor immune microenvironment, a study used RSI to assess the radiosensitivity of 10,469 primary tumor samples and evaluated the immune environmental components of each tumor. The results showed that tumors with high immune cell content were more sensitive to radiation because they were enriched in leukocytes, which are highly radiosensitive. Furthermore, tumors estimated to be highly sensitive to radiotherapy exhibited significant enrichment of interferon-related signaling pathways and immune cell infiltration (e.g., CD8 + T cells, activated natural killer cells, M1 macrophages) . In the radiation-induced cancer immune cycle, intrinsic radiosensitivity affects cancer cell antigen release, and immune status affects antigen-specific T cell activation . To elucidate the effect of the tumor microenvironment on the efficacy of radiotherapy in glioma patients, a study analyzed differences in the infiltration levels of immune cells, classifying patients into a radiosensitive (RS) group and a radioresistant (RR) group.
The results showed that the level of activated NK cell infiltration was significantly higher in the RS group, whereas the levels of macrophage, Treg cell, and resting NK cell infiltration were significantly higher in the RR group, and the immune score and PD-L1 expression levels were significantly higher in the RR group than in the RS group. These results indicated that patients in the RR group had higher immunogenicity, higher TMB and distinct mutational characteristics, although more clinical trials are needed to confirm this .

Integrating tumor radiosensitivity and immune status to predict clinical outcomes

In addition to focusing only on intrinsic tumor radiosensitivity, integrating radiosensitivity features with immune features can predict patients' clinical outcomes (Table ) . One study developed independent predictors, a radiosensitivity signature (RSS) and an immune signature (IMS), in breast cancer patients treated with radiotherapy. When the two signatures were integrated, patients with radiosensitive or immune-effective tumors gained better disease-specific survival (DSS) from radiotherapy. On the contrary, patients in the opposite group, defined as radioresistant and immunodeficient, had significantly lower DSS when they received radiotherapy. For individuals with radiosensitive but immunodeficient tumors, or radioresistant but immune-effective tumors, there was no significant difference in DSS between treatment groups . Another study in the Cancer Genome Atlas (TCGA) dataset showed significantly higher PD-L1 expression in the RR group than in the RS group, and the PD-L1-high-RR group had the worst survival, so the analysis focused on this group of patients. These studies identified the 31-gene signature and PD-L1 expression status as potential predictive markers for radiotherapy.
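The combined stratification described above reduces to a simple 2x2 rule: classify each patient by a radiosensitivity score and an immune score, and flag the subgroups expected to benefit from radiotherapy alone or from adding PD-1/PD-L1 blockade. A minimal sketch, assuming hypothetical score cutoffs since the published signatures use cohort-specific thresholds:

```python
def stratify(rs_score, immune_score, rs_cutoff=0.5, immune_cutoff=0.5):
    """Assign a patient to one of four radiosensitivity-by-immune-status
    groups. Cutoffs are hypothetical placeholders, not published values."""
    radiosensitive = rs_score >= rs_cutoff
    immune_effective = immune_score >= immune_cutoff
    if radiosensitive and immune_effective:
        return "RS/immune-effective"   # best predicted benefit from radiotherapy
    if radiosensitive:
        return "RS/immunodeficient"    # no clear DSS difference reported
    if immune_effective:
        return "RR/immune-effective"   # no clear DSS difference reported
    return "RR/immunodeficient"        # candidate for RT plus PD-1/PD-L1 blockade
```

The point of the sketch is the joint decision: neither score alone separates the group that gains DSS from radiotherapy from the group that is harmed or unaffected.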
Moreover, patients classified as PD-L1-high-RR exhibit radiotherapy resistance and an immunosuppressive tumor microenvironment (TME) through multiple mechanisms and may benefit from radiotherapy combined with PD-1/PD-L1 blockade. Therefore, integrating the 31-gene signature with PD-L1 expression status may help to identify the patient population most likely to benefit from the combination of radiotherapy and PD-1/PD-L1 blockade in clinical practice . In addition, it has been shown that RSI and PD-L1 status predict clinical outcome in patients with glioblastoma multiforme. A total of 399 patients were divided into RS and RR groups based on radiosensitivity genetic markers and into PD-L1-high and PD-L1-low groups based on CD274 mRNA expression, and differential and integrative analyses of expression and methylation data were performed. The results demonstrate the potential efficacy of radiotherapy combined with PD-1/PD-L1 blockade and angiogenesis inhibition in the PD-L1-high-RR group . Tumor radiosensitivity is also governed by other features of cancer, including tumor microenvironment dynamics, nutrient utilization, and multiple cellular complexes. A study showed that the RR-PD-L1-high group had depleted B cells and a significantly lower survival rate than the other groups, which predicted the prognosis of patients with locally advanced HNSCC . Some evidence points to the possibility that pathways associated with radiosensitivity may also modulate the immunogenicity of tumors and predict their response to immunotherapy. For example, inactivation of DNA repair mechanisms may trigger an immune response and impair tumor growth by driving the release of neoantigens, and the therapeutic efficacy of immunotherapy can be predicted by the presence of these DNA repair defects . Several common regulators of DNA repair and immune checkpoints have been identified, such as PARP inhibitors, which compromise DNA repair proficiency and radiosensitize tumor cells .
Several studies have demonstrated that the combined stratification of intrinsic radiosensitivity and immune status is superior to considering intrinsic radiosensitivity or immune status separately, and can therefore be used in preclinical evaluations to select patients or to determine whether radiation sensitizers and immunotherapy should be used together . With respect to whether immunotherapy modulates tumor intrinsic radiosensitivity, increasing evidence supports the idea that DNA repair defects modulate tumor immune checkpoints, but whether the immune checkpoints in turn modulate DNA repair pathways remains unclear, and this potential new mechanism by which immunotherapy modulates tumor intrinsic radiosensitivity still deserves further exploration in the future. Future Prospect Since intrinsic radiosensitivity and immune status affect the initial and effective phases of the radiation-induced cancer immune cycle, respectively, it is necessary to consider radiation in combination with immunity when selecting patients who may benefit from radiotherapy. Moreover, the prognostic value of RSI has been validated using multiple independent datasets, such as those used to predict the prognosis of patients treated with radiation for breast, pancreatic, glioblastoma, esophageal, and metastatic colorectal cancers . Despite the recognized differences in tumor radiosensitivity in preclinical and clinical settings, radiation dose prescriptions are not currently individualized in the field of radiation oncology based on the biology of the patient’s tumor. However, individualized adjustment of radiation dose based on patient tumor radiosensitivity is a promising strategy for effective radiotherapy, and radiosensitivity indices are expected to be potential biomarkers for combination radiotherapy and immunotherapy.
The intrinsic radiosensitivity of tumor cells is the main determinant of tumor response to radiation and involves multiple tumor signaling pathways and layers of molecular information (Fig. ). The cellular origin and differentiation state of tumor tissues are major factors affecting the radiosensitivity of tumor cells: tumors originating from radiosensitive tissues are more sensitive to radiation, while poorly differentiated tumors are less sensitive. Individual radiosensitivity also depends to a large extent on epigenetic factors, and the epigenetic mechanisms that determine the selection of metabolic patterns contribute to the radiosensitivity and adaptability of an organism. On the one hand, DNA methylation affects the initial damage process; on the other, a shift toward de novo methylation is associated with the further development of protective and repair processes. However, the exact genetic factors underlying inter-individual differences in cellular radiosensitivity are unknown. Understanding the cellular and genetic basis of radiosensitivity and identifying individuals with higher or lower radiosensitivity will facilitate population risk assessment, disease prediction, individualized radiotherapy, and the development of radiation protection standards. Moreover, observations of human tumors have revealed a clear relationship between cell proliferation and renewal rates and radiosensitivity: tumors with a rapid average growth rate and an elevated cell renewal rate are more sensitive to radiation. Because cellular radiosensitivity also differs across cell cycle phases, the redistribution of cell cycle phases within a cell population after irradiation can alter radiosensitivity.
Although many factors (i.e., dose, exposure volume, gender, age, underlying disease, and lifestyle) may influence individual radiosensitivity and susceptibility to radiation-induced cancer, inherent cellular radiosensitivity is genetically determined and underpinned by alterations involving DNA damage repair. Genetic alterations in proteins involved in DNA damage repair are responsible for individual differences in the radiation response, and mutations in DNA repair-related genes (i.e., p53, ATM, BRCA1, BRCA2, ERCC1, XRCC3 and Rad51) have been associated with radiosensitivity in lung cancer. For instance, individuals with biallelic ATM mutations show an approximately three-fold increase in radiosensitivity at the cellular, tissue and organismal levels compared with the average individual. The development of DNA-based markers is currently underway, and areas for additional research include the role of somatic mutations in DNA damage response genes that affect radiosensitivity. Exposure of cells to extracellular matrix proteins can increase radioresistance by promoting DNA damage repair and activating the Akt/MAPK signaling pathway. It has been demonstrated that the anti-apoptotic protein nucleolin (C23) can enhance radiosensitivity in non-small cell lung cancer (NSCLC) by affecting the activity of DNA-dependent protein kinase (DNA-PK). There is growing evidence that viral pathogenic factors are associated with the regulation of the cellular radiation response, treatment outcome, and clinical prognosis after radiotherapy, with modulation of DNA damage repair mechanisms being the most common point of attack. Malignancies with a viral etiology, such as those associated with human papillomavirus (HPV) and Epstein-Barr virus (EBV), tend to be more immunogenic and more sensitive to anticancer therapy.
One study identified a group of head and neck squamous cell carcinoma (HNSCC) patients who may benefit from CRI and showed a significantly improved prognosis in patients with HPV-positive tumors, attributed to increased intrinsic radiosensitivity and possibly to modulation of cytotoxic T-cell responses in the tumor microenvironment. A recent study indicated that in HPV-positive HNSCC, the virus hijacks cellular DNA repair mechanisms, alters cell cycle distribution, induces cell proliferation and displays peculiar hypoxic kinetics during radiation treatment. The mechanism described involves a reduced ability to repair DNA double-strand breaks, accompanied by enhanced radiation-induced G2/M cell cycle arrest. Additionally, excessive expression of immune checkpoints is strongly associated with radiosensitivity: high PD-1 expression was significantly associated with the clinical prognosis of HPV/p16-positive HNSCC, and patients in the radioresistant group, as well as HPV/p16-negative patients carrying radioresistant genetic markers, could benefit from combination CRI. Research on the EBV-regulated radiation response has centered on LMP-1, which is expressed in most EBV-associated malignancies. In nasopharyngeal carcinoma (NPC), LMP-1 inhibits DNA double-strand break repair by suppressing the phosphorylation and activity of DNA-PKcs, a key enzyme of the NHEJ pathway, and by inhibiting ATM-mediated repair of double-strand breaks. In addition to the tumor cells themselves, environmental factors such as oxygenation status may also affect radiosensitivity by further modulating damage induction and cellular responses. Therefore, since hypoxia is a classical driver of tumor radiation resistance, its elimination may be a potential solution to radioresistance. Hypoxia-inducible factor-1 (HIF-1) remains active in cells that survive radiation therapy and is associated with tumor cell resistance to radiotherapy.
It has been suggested that HIF-1 may modulate tumor radioresistance through reprogramming of glucose metabolism and cell cycle regulation. Tumors contain varying proportions of intrinsically radioresistant cancer stem cells (CSCs), which are closely associated with tumor hypoxia, and HIF-1α contributes to the development and maintenance of the CSC phenotype. Compared with differentiated tumor cells, the radioresistance of CSCs is characterized by reduced accumulation of radiation-induced DNA damage and increased activation of anti-apoptotic signaling pathways. Current strategies for predicting normal tissue radiosensitivity rely on genomics and large-scale prospective studies, and further research is still needed to identify the best predictive methods for radiosensitivity.
The radiosensitivity of tumor cells is strongly influenced by molecular variation at the genomic, transcriptional and translational levels. Radiosensitivity is a measure of the response of cells, tissues or individuals to ionizing radiation and can be used to predict which individuals will benefit from radiotherapy. Recent advances in gene sequencing and microarray technology for high-throughput RNA analysis have driven interest in identifying signatures that measure the intrinsic radiosensitivity of tumor cells. Developing a successful predictive assay of radiosensitivity has been a major research goal, and many genetic markers have been proposed to predict the radiosensitivity of tumors. These methods fall broadly into two categories. The first characterizes the surviving fraction of cancer cell lines after irradiation, which reflects the intrinsic radiosensitivity of cancer cells but fails to consider the influence of non-malignant cells in the tumor microenvironment, particularly the role of anti-tumor immunity. The second predicts patient progression after radiotherapy; it is dedicated to predicting the clinical outcome of radiotherapy but cannot be used for cellular-level studies, and it is difficult to reveal the underlying radiobiological mechanisms. Nevertheless, how to build a radiosensitivity prediction model has not been discussed systematically in recent years. The traditional experimental approach to determining intrinsic radiosensitivity is the surviving fraction of tumor cell lines at a single dose of 2 Gy (SF2), but this assay is not practical for routine use, so alternative strategies must be sought. The radiosensitivity index (RSI) is a 10-gene model built on the SF2 values of 48 human cancer cell lines and serves as a surrogate measure of clonogenic survival after a given radiation dose.
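As a toy illustration of the clonogenic endpoint behind SF2, the surviving fraction is the colony count normalized by the number of cells plated and their plating efficiency; the numbers below are hypothetical, not data from any cited study.

```python
def surviving_fraction(colonies, cells_plated, plating_efficiency):
    """Surviving fraction from a clonogenic assay.

    plating_efficiency is the fraction of unirradiated cells that form
    colonies, so the denominator is the expected colony count without
    irradiation.
    """
    return colonies / (cells_plated * plating_efficiency)

# Hypothetical assay: 500 cells plated, plating efficiency 0.8,
# 160 colonies counted after a 2 Gy dose.
sf2 = surviving_fraction(colonies=160, cells_plated=500, plating_efficiency=0.8)
print(round(sf2, 2))  # 0.4
```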
The 10-gene model comprises AR, cJun, STAT1, PKC, RelA, cABL, SUMO1, CDK1, HDAC1, and IRF1, genes that play crucial roles in the DNA damage response, histone deacetylation, cell cycle regulation, apoptosis and proliferation. The RSI prediction model is a linear regression algorithm that is independent of cancer type: RSI is designed to detect intrinsic tumor radiosensitivity regardless of tumor origin and has been independently validated as a pan-tissue biomarker of radiosensitivity at multiple disease sites. A 31-gene signature was developed by analyzing the NCI-60 cancer cell panel for genes whose expression was associated with SF2, and its correlation with radiosensitivity has been validated in various malignancies. Similarly, measuring the oxygen partial pressure of a tumor can indicate its level of hypoxia, which helps predict its radiosensitivity. Unfortunately, these parameters, even when used in combination, are insufficient to predict tumor radioresistance for clinical use. Since the relationship between radiation dose and survival is nonlinear, various mathematical formulas have been proposed to fit the radiation survival curve. The linear-quadratic (LQ) model has become the most widely used model for analyzing and predicting the ionizing radiation response in the laboratory and in the clinic, where the α/β ratio characterizes the sensitivity of specific tissue types to fractionation. The LQ model provides a simple relation between cell survival and delivered dose: S = exp(−αD − βD²). The radiosensitivity of cells is influenced by complex interactions between intrinsic polygenic traits, and as the mechanisms and biomarkers of radiosensitivity have become better understood, gene expression classifiers containing a few key genes have been used to predict radiosensitivity in specific tumor types or across human cancers.
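The LQ relation above can be evaluated directly; the α and β values below are illustrative placeholders chosen only to contrast a low α/β ratio (~3 Gy) with a high one (~10 Gy), not measured parameters for any tissue.

```python
import math

def lq_survival(dose, alpha, beta):
    """Linear-quadratic model: S = exp(-alpha*D - beta*D**2)."""
    return math.exp(-alpha * dose - beta * dose ** 2)

# Illustrative parameter pairs: alpha/beta = 0.15/0.05 = 3 Gy versus
# alpha/beta = 0.30/0.03 = 10 Gy, both evaluated at a 2 Gy fraction.
sf2_low_ratio = lq_survival(2.0, alpha=0.15, beta=0.05)
sf2_high_ratio = lq_survival(2.0, alpha=0.30, beta=0.03)
print(round(sf2_low_ratio, 3), round(sf2_high_ratio, 3))
```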
Based on RSI, the LQ model, and the time and dose of radiotherapy received by each patient, one team derived a genomic-adjusted radiation dose (GARD) model from more than 8,000 tumor samples across more than 20 tumor types. GARD predicts the efficacy of radiotherapy and can guide the radiation dose to match individual tumor radiosensitivity, with higher GARD values associated with better radiotherapy efficacy. Given that the range of GARD values varies among cancer types, RSI alone cannot fully represent the treatment effect, and tumor type and genetic testing must be combined to determine the appropriate radiotherapy dose for individual patients. Beyond the classical biological mechanisms mentioned above, gene sequencing has revealed a regulatory role of non-coding RNAs in radiosensitivity, and their high-throughput properties aid the study of radiosensitivity mechanisms. An earlier study used a gene expression classifier to predict radiosensitivity, treating radiosensitivity as a continuous variable, with significance analysis of microarrays for gene selection and a multiple linear regression model for prediction. Three new genes (RbAp48, RGS19 and R5PIA) whose expression values correlated with radiosensitivity were identified in the gene selection step and transfected into cancer cell lines. The results established that RbAp48 overexpression increased radiosensitivity 1.5-2-fold and raised the proportion of cells in the G2-M phase of the cell cycle. The study also showed that overexpression of RbAp48 was related to dephosphorylation of Akt, suggesting that RbAp48 may exert its effects by antagonizing the Ras pathway. This work established that radiosensitivity can be predicted from gene expression profiles and introduced a genomic approach to identifying novel molecular markers of radiosensitivity.
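A minimal sketch of a GARD-style calculation, assuming the commonly described derivation (β fixed at 0.05 Gy⁻² and a patient-specific α solved from RSI treated as survival after a single 2 Gy fraction); treat the details as illustrative rather than a faithful reimplementation of the published model.

```python
import math

BETA = 0.05  # Gy^-2, fixed across patients in this sketch

def alpha_from_rsi(rsi, ref_dose=2.0):
    """Solve RSI = exp(-alpha*d - beta*d^2) at d = 2 Gy for alpha."""
    return -math.log(rsi) / ref_dose - BETA * ref_dose

def gard(rsi, n_fractions, dose_per_fraction):
    """GARD = n*d*(alpha + beta*d); higher values imply a larger
    predicted biological effect of the delivered schedule."""
    d = dose_per_fraction
    return n_fractions * d * (alpha_from_rsi(rsi) + BETA * d)

# A radiosensitive tumor (low RSI) accumulates a higher GARD than a
# radioresistant one (high RSI) under the same 30 x 2 Gy schedule.
print(round(gard(0.2, 30, 2.0), 1), round(gard(0.6, 30, 2.0), 1))  # ~48.3 vs ~15.3
```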
Moreover, some traditional pathology techniques remain valid for assessing tumor radiosensitivity. For instance, hematoxylin and eosin staining can be used to identify radiosensitive (i.e., seminoma) or radioresistant (i.e., glioma) tumors (Fig. ). More advanced pathologic techniques such as DNA methylome analysis are now used to classify tumors but do not yet guide the clinical prescription of radiotherapy doses.
Radiotherapy has long been hampered by unsatisfactory prediction of radiosensitivity, and finding biomarkers that predict radiosensitivity could help improve its efficacy. Chromosomal aberrations and DNA damage, in particular double-strand breaks (DSBs), are among the few cellular markers with some correlation to cellular radiosensitivity. Signaling molecules involved in the DNA damage response are excellent candidate radiosensitivity biomarkers; relevant examples include MRE11, AIMP3, NBN, and BRE, with MRE11 a potential predictive biomarker of radiotherapy benefit. Current research suggests that the γ-H2AX assay, as a rapid and sensitive biomarker, can be used in epidemiological studies to measure changes in radiosensitivity. γ-H2AX focus analysis together with DSB repair gene polymorphisms can be used to assess cellular radiosensitivity, which will assist in population risk assessment, disease prediction, individualized radiotherapy, and the development of radiation protection standards. Additionally, an evaluation of the predictive significance of the systemic immune-inflammation index (SII) for overall survival and radiosensitivity in advanced NSCLC showed favorable radiosensitivity in the low-SII group, while higher SII levels were associated with poorer overall survival and radiosensitivity. Cellular radiosensitivity can also be assessed by quantifying DSB damage and repair. Among the different types of DNA damage, DSBs show the slowest and most lethal repair dynamics and are therefore more helpful in explaining clinical radiosensitivity than damage types with rapid repair kinetics.
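The SII mentioned above is simple arithmetic on a blood count, commonly defined as platelets × neutrophils / lymphocytes. The counts and the low/high cutoff below are hypothetical placeholders, since cutoffs are study-specific.

```python
def systemic_immune_inflammation_index(platelets, neutrophils, lymphocytes):
    """SII = platelets * neutrophils / lymphocytes (counts in 10^9 cells/L)."""
    return platelets * neutrophils / lymphocytes

# Hypothetical blood counts; 660 is only a placeholder cutoff, not a
# validated threshold from any cited study.
sii = systemic_immune_inflammation_index(platelets=250, neutrophils=4.0,
                                         lymphocytes=1.6)
print(sii, "low SII" if sii < 660 else "high SII")
```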
The molecular mechanisms involved in the radiation-induced response are complex, and gene expression levels do not consistently reflect the properties of all proteins in normal or tumor cells; direct detection of protein expression may therefore be more effective in capturing the complexity of these mechanisms and the large number of molecular signatures involved in cellular radiation-induced responses. The proteomic approach allows the identification of the various proteins involved in the cellular response to ionizing radiation, which may be useful for identifying candidate predictive biomarkers. Tumor radiosensitivity is related to the basal expression levels of intracellular or cell-membrane proteins, and direct detection of protein expression in proteomics studies captures protein sequences and post-translational modifications that can be used for the early diagnosis, prognosis and treatment of cancer. Current proteomics technologies can detect and analyze proteomic information from cells, tissues or body fluids, providing a strong platform for biomarker research and development. High-throughput radioproteomics is the latest tool: mass spectrometry (MS) analyzes and identifies unknown proteins by converting protein molecules into gas-phase ions through an ionization source and applying the instrument's electromagnetic field to separate proteins by mass-to-charge ratio. The advantages of MS are its speed, sensitivity and resolution, and proteomics research based on liquid chromatography-mass spectrometry (LC-MS) is now widely used.
The intrinsic radiosensitivity of NSCLC is mainly regulated by signaling pathways involving proteoglycans, focal adhesion and the actin cytoskeleton in cancer. Radiosensitivity-specific proteins can guide individualized radiotherapy in the clinic by predicting the radiation response of NSCLC patients.
In the era of immunotherapy, reliable genomic predictors to identify the optimal patient populations for CRI are lacking. Comprehensive analyses of radiosensitivity-associated genes and proteins in lung cancer and other solid tumors have been used to identify potential biological predictors of radiosensitivity. There is some evidence that radiosensitivity can predict the effect of radiotherapy and immunotherapy ( Table ). To first determine whether tumor radiosensitivity correlates with immune system activation across tumor types, Tobin et al. analyzed 10,240 genotypically distinct solid primary tumors, using 12 chemokine genes to define intratumoral immune activation, and determined that low RSI was significantly associated with elevated immune activation (using an RSI threshold of 0.3745), supporting the association of RSI with immune-related signaling networks in patients' tumors. In another study, a total of 12,832 primary tumors from 11 major cancer types were analyzed in relation to DNA repair and immune subtypes to determine whether genomic scores of radiosensitivity were associated with immune responses. RSI was related to various immune-related signatures and predicted responses to PD-1 blockade, emphasizing the promising potential of RSI as a candidate biomarker for CRI. In addition, another study identified enhanced immune checkpoint interactions in radioresistant tumors, providing a new theoretical basis for combining radiotherapy and immune checkpoint inhibitors (ICIs) in the treatment of HNSCC. RSI-low tumors may be characterized by higher genomic instability and consequently a higher mutational burden, which is associated with dominant IFN-γ signaling responses and predicted efficacy of PD-1 blockade. Taken together, RSI-low tumors may represent a distinct subgroup and a therapeutic target for immunotherapy.
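Applying the RSI cutoff of 0.3745 cited above reduces to a one-line classification rule; the labels below are illustrative.

```python
RSI_CUTOFF = 0.3745  # threshold cited in the text for defining RSI-low tumors

def classify_rsi(rsi, cutoff=RSI_CUTOFF):
    """Label a tumor radiosensitive (RSI-low) or radioresistant (RSI-high)."""
    if rsi <= cutoff:
        return "RSI-low (radiosensitive)"
    return "RSI-high (radioresistant)"

print(classify_rsi(0.21))  # RSI-low (radiosensitive)
print(classify_rsi(0.52))  # RSI-high (radioresistant)
```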
The molecular mechanisms underlying the biological effects of radiotherapy can affect the cellular response to and repair of DSBs, but research on the mechanisms linking RSI and the immune response is currently limited. To explore this relationship further, a team used whole transcriptomic and matched proteomic data from 12,832 primary and 585 metastatic tumors and found that RSI was associated with a variety of immune-related genomic and molecular features: low RSI correlated with a dominant response to IFN-γ signaling and predicted efficacy of PD-1 blockade. Lower RSI was also associated with higher homologous recombination deficiency (HRD) scores and higher tumor mutational burden (TMB), suggesting the presence of defective DNA repair mechanisms and potential responsiveness to immune-based therapies. HRD scores correlated with genes involved in homologous recombination repair, including BRCA1, BRCA2, RAD51B, and RAD51C, and alterations in these genes were related to radiosensitivity. In addition, lower RSI correlated with a higher RNA stemness score, indicating a greater degree of stemness and tumor de-differentiation, which is in turn related to increased PD-L1 protein expression. Intriguingly, RSI-low tumors in gastric cancer exhibited both higher microsatellite instability (MSI) and higher TMB, molecular profiles that define subgroups with a favorable prognosis after immunotherapy. Furthermore, since two RSI genes (STAT1 and IRF1) lie downstream of IFN-γ-mediated signaling, RSI correlates better with various immune-related molecular features and phenotypes than other genes and genetic signatures associated with the radiation response.
At the same time, the immune system plays a crucial role in tumor radiosensitivity. To explore the relationship between intrinsic tumor radiosensitivity and the immune system, a study investigated radiation-induced tumor equilibrium and dormancy in animal models and asked whether host immune responses contribute to radiation-induced tumor equilibrium. The study used two mouse models, TUBO (HER2-positive breast cancer) and B16 (melanoma), and observed four possible tumor responses to radiotherapy: non-responsive tumors (no response to radiotherapy); responsive tumors (tumor regression within 10 days after radiation); stable tumors (tumors that regress and remain stable and palpable during a 34-60-day observation period); and late recurrent tumors (tumor recurrence after 60 days). Inherent cellular radiosensitivity is frequently hypothesized to explain the differences in tumor regeneration rates observed after radiotherapy, so this study determined the radiosensitivity of tumor cells taken from mice that responded variably to radiotherapy. These tumors were surgically removed, digested into single-cell suspensions, irradiated in vitro with 2, 5, or 10 Gy, and assessed with clonogenic assays. Tumor cells with different responses to radiation in vivo exhibited indistinguishable radiosensitivity in vitro. This finding revealed that the degree of tumor cell radiosensitivity alone cannot explain the different tumor responses to local radiotherapy; by contrast, immune cells and their cytokines were shown to play a pivotal role in inhibiting tumor cell regeneration in the two experimental animal models. Traditional radiosensitivity studies have focused on tumor cells, neglecting the effects of the tumor microenvironment, which consists of stromal and immune cells.
To explore the relationship between RSI and its associated tumor immune microenvironment, a study used RSI to estimate the radiosensitivity of 10,469 primary tumor samples and characterized the immune components of each tumor. Tumors with high immune cell content were more sensitive to radiation because they were enriched in leukocytes, which are highly radiosensitive. Furthermore, tumors estimated to be highly sensitive to radiotherapy exhibited significant enrichment of interferon-related signaling pathways and immune cell infiltration (i.e., CD8 + T cells, activated natural killer cells, M1 macrophages). In the radiation-induced cancer immune cycle, intrinsic radiosensitivity affects cancer cell antigen release, whereas immune status affects antigen-specific T cell activation. To elucidate the effect of the tumor microenvironment on the efficacy of radiotherapy in glioma patients, a study analyzed differences in immune cell infiltration between patients classified into a radiosensitive (RS) group and a radioresistant (RR) group. The level of activated NK cell infiltration was significantly higher in the RS group, whereas the levels of macrophage, Treg cell, and resting NK cell infiltration were significantly higher in the RR group, and the immune score and PD-L1 expression were significantly higher in the RR group than in the RS group. These results indicate that patients in the RR group had higher immunogenicity, higher TMB and distinct mutational characteristics, findings that require more clinical trials to confirm.
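Immune cell infiltration estimates like those above are typically obtained by deconvolving bulk expression against reference cell-type profiles; the following is a toy least-squares sketch with made-up numbers (real tools such as CIBERSORT use large signature matrices and constrained regression).

```python
import numpy as np

# Illustrative signature matrix S: columns are reference expression profiles
# for three immune cell types across four marker genes; b is a bulk tumor
# profile mixed from them with known (hidden) fractions.
S = np.array([[10.0, 1.0, 2.0],
              [ 2.0, 8.0, 1.0],
              [ 1.0, 2.0, 9.0],
              [ 5.0, 5.0, 1.0]])
true_fracs = np.array([0.5, 0.3, 0.2])
b = S @ true_fracs

coef, *_ = np.linalg.lstsq(S, b, rcond=None)  # least-squares fit of fractions
fractions = coef / coef.sum()                  # normalize to proportions
print(np.round(fractions, 2))  # ~[0.5, 0.3, 0.2]
```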
Integrating tumor radiosensitivity and immune status to predict clinical outcomes

In addition to focusing only on intrinsic tumor radiosensitivity, integrating radiosensitivity features with immune features can predict patients' clinical outcomes ( Table ). One study developed two independent predictors, a radiosensitivity signature (RSS) and an immune signature (IMS), in breast cancer patients treated with radiotherapy. When both signatures were integrated, patients with radiosensitive and immune-effective tumors gained better disease-specific survival (DSS) from radiotherapy. In contrast, patients in the opposite group, defined as radioresistant and immunodeficient, had significantly lower DSS when they received radiotherapy. For individuals with radiosensitive but immunodeficient tumors, or radioresistant but immune-effective tumors, there was no significant difference in DSS between treatment groups. Another study in The Cancer Genome Atlas (TCGA) dataset showed significantly higher PD-L1 expression in the RR group than in the RS group; the PD-L1-high-RR group had the worst survival, so the analysis focused on these patients. These studies identified the 31-gene signature and PD-L1 expression status as potential predictive markers for radiotherapy. Moreover, patients classified as PD-L1-high-RR exhibit radiotherapy resistance and an immunosuppressive tumor microenvironment through multiple mechanisms and may benefit from radiotherapy combined with PD-1/PD-L1 blockers. Therefore, integrating the 31-gene signature with PD-L1 expression status may help identify the patient population most likely to benefit from combined radiotherapy and PD-1/PD-L1 blockade in clinical practice. In addition, RSI and PD-L1 status have been shown to predict clinical outcome in patients with glioblastoma multiforme: 399 patients were divided into RS and RR groups based on radiosensitivity genetic markers and into PD-L1-high and PD-L1-low groups based on CD274 mRNA expression.
Differential and comprehensive analyses of expression and methylation data were performed, and the results demonstrate the potential efficacy of radiotherapy combined with PD-1/PD-L1 blockade and angiogenesis inhibition in the PD-L1-high-RR group. Tumor radiosensitivity is also governed by other features of cancer, including tumor microenvironment dynamics, nutrient utilization, and multiple cellular complexes. One study showed that the RR-PD-L1-high group had depleted B cells and a significantly lower survival rate than the other groups, which predicted the prognosis of patients with locally advanced HNSCC. Some evidence suggests that the pathways associated with radiosensitivity may also modulate the immunogenicity of tumors and predict their response to immunotherapy. For example, inactivation of DNA repair mechanisms may trigger an immune response and impair tumor growth by releasing neoantigens, and the therapeutic efficacy of immunotherapy can be predicted by the presence of these DNA repair defects. Several common regulators of DNA repair and immune checkpoints have been identified; for example, PARP inhibitors impair DNA repair proficiency and radiosensitize tumor cells. Several studies have demonstrated that combined stratification by intrinsic radiosensitivity and immune status is superior to considering either alone, and can therefore be used in preclinical evaluations to select patients or to determine whether radiation sensitizers and immunotherapy should be used together.
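The combined RSS/IMS stratification described above reduces to a 2×2 rule; a minimal sketch with illustrative labels (not the original authors' terminology):

```python
def stratify(radiosensitive, immune_effective):
    """2x2 grouping of a tumor by radiosensitivity and immune status,
    mirroring the RSS/IMS grouping: concordant groups carry a clear
    prediction, discordant ("mixed") groups do not."""
    if radiosensitive and immune_effective:
        return "RS/immune-effective: better DSS predicted with radiotherapy"
    if not radiosensitive and not immune_effective:
        return "RR/immunodeficient: lower DSS predicted with radiotherapy"
    return "mixed: no significant DSS difference between treatment groups"

for rs in (True, False):
    for im in (True, False):
        print(rs, im, "->", stratify(rs, im))
```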
As to whether immunotherapy modulates intrinsic tumor radiosensitivity, increasing evidence supports the idea that DNA repair defects modulate tumor immune checkpoints, but whether immune checkpoints in turn modulate DNA repair pathways remains unclear; this potential new mechanism by which immunotherapy could modulate intrinsic tumor radiosensitivity deserves further exploration.
Future Prospect

Since intrinsic radiosensitivity and immune status affect the initiation and effector phases of the radiation-induced cancer immune cycle, respectively, it is necessary to consider radiosensitivity together with immune status when selecting patients who may benefit from radiotherapy. Moreover, the prognostic value of RSI has been validated in multiple independent datasets, including those used to predict the prognosis of patients treated with radiation for breast, pancreatic, glioblastoma, esophageal, and metastatic colorectal cancers. Despite the recognized differences in tumor radiosensitivity in preclinical and clinical settings, radiation dose prescriptions in radiation oncology are not currently individualized based on the biology of the patient's tumor. However, individualized adjustment of radiation dose based on tumor radiosensitivity is a promising strategy for effective radiotherapy, and radiosensitivity indices are expected to become biomarkers for combined radiotherapy and immunotherapy.
In this review, we first presented the mechanisms underlying the interaction between radiotherapy and immunotherapy: radiotherapy serves as an essential adjunct to immunotherapy by providing danger signals and antigens and by activating innate immunity, while immunotherapy can sensitize tumors to subsequent radiotherapy, reducing the radiation dose required to eradicate them. We then described the effects of tumor cell radiosensitivity and the methods used to predict it. The biological effects of radiation are mediated by a complex network of signaling pathways; advances in genomics can guide radiotherapy alone or in combination, and the commercialization of genomics-based tools will be important to facilitate their implementation. Furthermore, radiosensitivity holds promise as a predictor for the clinical application of combined radiotherapy and immunotherapy, and future clinical investigations will need to translate preclinical and discovery data into new clinical trials to demonstrate reproducibility in patients and to optimize the efficacy of combination therapy. In summary, the radiosensitivity of tumor cells can help predict the efficacy of CRI, and integrating immune status with radiosensitivity can further improve the prediction of clinical outcome. In the future, CRI treatment should rely on mining and detecting multiple biomarkers to achieve precision oncology.
|
Trans-cardiac perfusion of neonatal mice and immunofluorescence of the whole body as a method to study nervous system development | 2e4b4a2a-e7ec-4f72-a6d0-675da310dd1e | 9560478 | Anatomy[mh] | The brain is a highly complex structure consisting of various cell types . The adult rodent brain has been extensively studied for the last few decades, particularly focusing on understanding the structural and functional connection between neurons and glial cells. Nevertheless, the knowledge on how these connections are formed during development is still limited. To study the structure and expression patterns of the brain and its different cell types requires the preservation of the integrity of the tissue. To admire this, many methods are based on trans-cardiac perfusion of animals with formaldehyde solution . In this method, the fixative solution is introduced through the left ventricle of the heart to the vascular system and reaches all the cells via the circulatory system and the capillary net . There is also the possibility of fixating the tissue via immersion . However, the effectiveness of this method is limited, depending on the size of the specimen, the fixative does not reach the inner cell layers . In addition, the fixation by perfusion provides faster preservation than the fixation by immersion. The perfusion of adult mice and other rodents has been long established , serving as a standard method for multiple structural and biochemical analysis. However, the conduction of the same method in neonatal rodents, or during their early development, is not well described in literature. In neonatal pups the circulatory system is harder to reach, to manipulate and to use for perfusion. This entails one of the main problems when perfusing such small animals. The perfusion of neonatal pups has been reported in the literature before, however, in the methods section the specific protocol is usually not specified, making it hard to have a reliable result . 
Moreover, a reliable method to study multiple tissues in one section during developmental stages is lacking in the literature, for example when investigating the interaction between multiple tissues, as in the gut-brain axis . In this respect, our protocol maintains the original structure of the complete neonatal mouse pup, allowing multiple tissues to be stained on the same slide. Thus, resources are saved and researchers have a fast procedure at hand that provides valuable tissue and high-quality immunohistochemical stainings, helping to dissect tissue interactions and their role in neural development. In this study, we provide the exact settings and a detailed protocol on how to successfully perform trans-cardiac perfusion in neonatal mice followed by whole-body immunohistochemical stainings.
The protocol described in this peer-reviewed article is published on protocols.io, dx.doi.org/10.17504/protocols.io.bp2l61ow5vqe/v1, and is included for printing with this article.

Fixation by immersion

Neonatal mouse pups were anesthetized using a mix of xylazine and ketamine (520 mg/kg ketamine and 78 mg/kg xylazine) in saline solution. 20 µL of this mix were injected intraperitoneally using a small insulin syringe. Once the heartbeat and breathing had stopped, the paw reflex was checked. A small incision was made in the thorax of the pups, leaving the organs exposed. The pups were then immersed in a 4% w/v PFA solution and left for 24 hours at 4°C. The following day, the pups were washed with PBS-/- and immersed in a sucrose gradient (10%, 20%, 30%), remaining in each solution until they sank to the bottom of the Falcon tube. After that, freezing and cryosectioning of the samples were performed following the protocol mentioned previously.

Animals

All animal experiments were performed in compliance with the guidelines for the welfare of experimental animals issued by the Federal Government of Germany and approved by the Regierungspraesidium Tübingen and the local ethics committee at Ulm University (ID number: O.103). C57BL/6 mice were used for breeding. They were housed under constant temperature (22 ± 1°C) and humidity (50%) conditions with a 12 h light/dark cycle and provided with food and water ad libitum .
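As a quick sanity check on the anaesthetic doses above (520 mg/kg ketamine and 78 mg/kg xylazine, delivered in a 20 µL intraperitoneal injection), the absolute drug amounts and the implied mix concentrations follow from simple arithmetic. The sketch below is illustrative only; the example pup mass and the helper names are our assumptions, not part of the protocol.

```python
# Illustrative dose arithmetic for the anaesthetic mix described in the
# protocol (520 mg/kg ketamine, 78 mg/kg xylazine, 20 uL injected i.p.).
# The example pup mass is an assumption for demonstration, not a protocol value.

KETAMINE_MG_PER_KG = 520.0
XYLAZINE_MG_PER_KG = 78.0
INJECTION_UL = 20.0

def dose_mg(pup_mass_g: float, dose_mg_per_kg: float) -> float:
    """Absolute drug amount (mg) needed for a pup of the given mass."""
    return pup_mass_g / 1000.0 * dose_mg_per_kg

def mix_concentration_mg_per_ml(pup_mass_g: float, dose_mg_per_kg: float,
                                injection_ul: float = INJECTION_UL) -> float:
    """Stock concentration (mg/mL) that delivers the target dose in one injection."""
    return dose_mg(pup_mass_g, dose_mg_per_kg) / (injection_ul / 1000.0)

if __name__ == "__main__":
    mass_g = 1.5  # assumed neonatal pup mass, for illustration only
    print(f"ketamine: {dose_mg(mass_g, KETAMINE_MG_PER_KG):.2f} mg "
          f"({mix_concentration_mg_per_ml(mass_g, KETAMINE_MG_PER_KG):.1f} mg/mL)")
    print(f"xylazine: {dose_mg(mass_g, XYLAZINE_MG_PER_KG):.3f} mg "
          f"({mix_concentration_mg_per_ml(mass_g, XYLAZINE_MG_PER_KG):.2f} mg/mL)")
```

For a hypothetical 1.5 g pup this works out to 0.78 mg ketamine (39 mg/mL when delivered in 20 µL) and about 0.117 mg xylazine.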
Analysis of neonatal mouse pups by immunohistochemical staining is an essential tool to study the development and maturation of cells and organs. An easy approach is to submerge the whole pup in formaldehyde solution and to proceed from there following standard procedures . However, the tissue is then not cleared of blood, as seen from the pale color of the skin ( , left panel) in comparison to well-cleared perfused tissue ( , right panel). The liver is a good indicator of clearance, turning pale ( , magnification). In addition, perfusion offers several advantages . The whole body can be cut and stained in one piece, whereas with immersion usually one tissue at a time is fixed. The quality of the staining improves significantly; cryosectioning of the whole pup becomes possible, since the skin does not stick to the cryo-knife; the whole body is fixed equally through distribution of the formaldehyde solution via the circulatory system; and the procedure is faster than fixation by immersion. The setup for the perfusion is displayed in . After anesthesia, the chest is opened as depicted in . Essentially, the right atrium of the heart has to be cut, as indicated with the blue cross, to allow the fixative to exit the body. Furthermore, a 27G needle has to be used for the perfusion, since we found that larger needles cannot reliably be introduced into the mouse heart, leading to eventual failure of the perfusion. A 27G needle also makes it easier, compared to the needles usually used on adults, to control the speed and volume of the buffers introduced into the circulatory system and allows a constant flow. Before introducing the formaldehyde solution into the circulatory system, the blood has to be cleared out with a saline solution, in this case PBS-/-, so the fixative solution can reach all tissues equally. As with other fixation methods , the pH and osmolality of the solution have to be adjusted.
The pH has to match the pH of the blood to keep the tissue in as close to an ideal state as possible and to avoid acidosis or alkalosis, which could damage the tissues of interest . In addition, the velocity and pressure of the pump, as well as the fixation time, have to be adjusted to avoid the rupture of any vessel and, consequently, failure of the procedure and loss of the sample. We tried different velocities and settled on 1 mL/min as a compromise between the pressure the tissue can withstand, the lowest velocity the pump can maintain without stalling, and the time the perfusion takes. To make sure that all the blood was cleared out, we used 10 mL of PBS-/-. Afterwards, we used the same volume of formaldehyde solution to fix all the tissues. Counting both parts, we ended up with a total fixation time of 20 min (as shown in ). The perfusion is complete as soon as the color of the liver has changed from red to a pinkish appearance. After perfusion, we submerged the whole pup in formaldehyde solution, followed by gradual dehydration in sucrose solution . The pup was then embedded in gelatin (see for details). We found that gelatin works significantly better for cryosectioning than other embedding media such as Tissue-Tek O.C.T. (O.C.T.), since the gelatin sticks well to the skin of the pup and therefore prevents the skin from separating from the rest of the tissue during later cutting for immunohistochemistry . To demonstrate immunofluorescent staining using this perfusion method, we used antibodies directed against Ionized Calcium-Binding Adapter Molecule 1 (Iba1), expressed in macrophages and microglia , Glial Fibrillary Acidic Protein (GFAP), expressed in astrocytes and Schwann cells , and α-Actinin (a protein of the muscle sarcomere) , and counterstained the sections with DAPI. The overview in shows that all organs of the pup are stained equally well. The tissues stayed in place and were neither disrupted nor pulled apart.
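The 20 min total quoted above follows directly from the volumes and pump speed used here (10 mL PBS-/- plus 10 mL formaldehyde solution at 1 mL/min). A minimal sketch of this arithmetic (the function name is ours, for illustration only):

```python
# Minimal perfusion-timing arithmetic: total time = perfused volume / pump rate.
# The defaults below are the values used in this protocol (10 mL PBS-/-,
# 10 mL fixative, pump at 1 mL/min).

def perfusion_minutes(clear_ml: float, fix_ml: float, rate_ml_per_min: float) -> float:
    """Total perfusion time in minutes for the clearing and fixation steps."""
    return (clear_ml + fix_ml) / rate_ml_per_min

if __name__ == "__main__":
    print(perfusion_minutes(10.0, 10.0, 1.0))  # -> 20.0, matching the 20 min in the text
```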
The skin still surrounded the whole section. This method is suited both to assessing different organs at the same time, as shown in the overview , and to performing detailed acquisition of high-quality tissue, as shown in the magnifications . The regions shown in were chosen to display that the cellular organization of different tissues of interest, such as the brain, spinal cord and intestine, is well maintained after perfusion and immunocytochemistry. The advantage of staining the whole body in one section, with multiple tissues on the same slide as shown in , is that it allows the interaction of various tissues or organs during development to be studied. For example, it has recently been described that the gut impacts the brain during development . To show the suitability of this technique for such studies, we present stainings from the nervous system, including the brain cortex , alongside a staining from the digestive system, the intestine . Iba1 is expressed in macrophages and microglia , and GFAP is expressed in multiple glial cells, including astrocytes . Although Iba1-positive cells are distributed throughout the body , in the brain Iba1-positive microglia localize especially in the subventricular zone , and their distribution becomes more dispersed towards the cortex , as published previously . In the intestine, the distribution of Iba1-positive macrophages in the muscularis externa and the α-Actinin-positive muscle cells covering the villi can be observed. The distribution of astrocytes in the spinal cord reflects the distinction between gray matter and white matter, with more astrocytes in the white matter. In addition, α-Actinin-expressing muscle cells are visible. This method also allows for the cutting and staining of bone tissue . Next to the GFAP-positive spinal cord and the α-Actinin-positive muscle cells, the spinal bones are stained with DAPI .
We demonstrate that trans-cardiac perfusion of the neonatal mouse offers a fast and reliable way to obtain tissue for high-quality immunofluorescent staining. While there are other ways to fix tissues, they do not provide complete fixation and excellent preservation of multiple organs and structures at the same time. The benefit of this approach, compared to those used until now , is that the blood is removed from the circulatory system, which can then be used for the fixation of the different organs, providing more stable and compact tissue. Moreover, complete fixation of the body allows easier manipulation while cutting, since it also fixes the skin and connective tissue, the parts of the body that pose the greatest difficulty during whole-body sectioning. Another advantage of this protocol is that the tissue is ready for immunohistochemistry within a short period. While fixation by immersion can take 24–48 hours, trans-cardiac perfusion takes roughly 20 minutes plus the post-fixation time, which can vary from 1 hour (for a single tissue, e.g. brain) to 12 hours (for the whole body). Therefore, trans-cardiac perfusion of neonatal pups helps to advance and broaden the knowledge of the early developmental stages of the mouse.
S1 File Step-by-step protocol, also available on protocols.io. (PDF)
S1 Video Video tutorial of trans-cardiac neonatal perfusion. Part 1. (MP4)
S2 Video Video tutorial of trans-cardiac neonatal perfusion. Part 2. (MP4)
|
Approach to goitre in family medicine practice | 9c2e85d2-8161-4444-89ed-5fdc794aca21 | 9728309 | Family Medicine[mh] | A goitre is an abnormal enlargement of the thyroid gland. It can present either as a solitary nodule or diffuse enlargement and may be associated with symptoms of hypothyroidism or hyperthyroidism.
Goitre is a common condition in primary care practice. It can present as a complaint of a neck lump, with or without associated mass effects such as dysphagia, stridor or dysphonia. Patients may have clinical manifestations of hyperthyroidism or hypothyroidism. In a study of 15,008 adults residing in ten iodine-replete cities across China, the prevalence of goitres detected on ultrasonography was 15.7%. With the advent of computed tomography and magnetic resonance imaging, family physicians may also encounter patients with incidental thyroid nodules, found in up to 16% of these scans. Thyroid malignancy occurs in 7%–15% of nodules, depending on individual risk factors such as age, gender, radiation exposure and family history. During the initial assessment of a patient with a goitre, general practitioners must consider the myriad causes and the appropriate management at the primary care level. A frequent dilemma is the urgency of referral and whether to refer the patient to an endocrinologist or a surgeon. We summarise the common causes of goitre encountered in family practice and provide a management algorithm to aid the right-siting and care of this group of patients.
Clinical approach

In a patient who presents with an anterior neck mass, it is prudent to consider other differential diagnoses apart from a goitre . These can be categorised into three main causes. Congenital abnormalities are usually non-tender and slow growing. They remain asymptomatic unless an infection occurs, which can result in a tender mass with fever or discharging sinuses. Inflammatory causes from cervical lymphadenopathy can be attributed to either infective or non-infective causes. Submental cervical lymphadenitis tends to be associated with infections of the lip, floor of the mouth and skin of the cheeks. Referral for imaging and evaluation should be considered in patients with persistent lymphadenopathy despite 6 weeks of monitoring, or in suspected bacterial lymphadenopathy with worsening symptoms despite initial antibiotic treatment. Neoplastic lesions can be either benign or malignant. Benign masses, which are generally slow growing, include lipomas, epidermal cysts or neuromas. Malignant masses may be due to primary cancers (such as lymphomas or sarcomas) or lymph node metastases from cancers of the head and neck, upper respiratory tract, oesophagus or a distant site.

History

The following four main questions in history-taking are helpful in distinguishing the causes of the goitre.

What are the characteristics of the goitre and its possible triggers?

Duration: Has it been present since childhood?
Involvement: Is it diffusely enlarged or a solitary nodule?
Rate of growth: Is it slow growing or rapidly enlarging? (Rapid enlargement may be suggestive of malignancies such as anaplastic thyroid carcinoma or lymphoma.)
Associated symptoms: (a) Focal pain or fever would be suggestive of thyroiditis. Patients may present with sudden onset of pain with rapid enlargement if spontaneous haemorrhage into a thyroid nodule occurs. Anaplastic thyroid carcinoma may also present as a rapidly enlarging and painful neck mass. (b) Ophthalmic symptoms in Graves' disease include diplopia, blurring of vision, orbital pain or a gritty sensation with increased tearing.
A recent upper respiratory tract infection in the preceding 2–8 weeks may precipitate subacute thyroiditis.
Iodine-deficient diets lead to the formation of colloid nodular goitres.
A personal history of autoimmune diseases (such as myasthenia gravis, Addison's disease, pernicious anaemia, type 1 diabetes mellitus, rheumatoid arthritis, systemic lupus erythematosus or vitiligo) is associated with autoimmune thyroid diseases such as Graves' disease and Hashimoto's thyroiditis.

Are there any compressive symptoms?

Compressive symptoms due to impingement or displacement of the trachea, oesophagus or great vessels can occur in large goitres or those with retrosternal extension, owing to the confined space of the thoracic inlet. Tracheal compression may manifest as dyspnoea, stridor, wheezing or cough. Depending on the severity of compression, symptoms may occur at rest, on exertion or with positional changes. Patients with goitres with intrathoracic extension may experience dyspnoea during manoeuvres that push the thyroid into the thoracic inlet, such as bending forward or lying supine. Hoarseness of voice may be seen in patients with invasion or compression of the recurrent laryngeal nerve, causing transient or permanent vocal cord paralysis. Occasionally, goitres may compress the cervical sympathetic chain, resulting in Horner's syndrome with a triad of ptosis, miosis and decreased sweating on the ipsilateral side of the face. Patients may have dyspnoea due to phrenic nerve paralysis. Rarely, superior vena cava syndrome manifesting as facial swelling and jugular vein thrombosis may develop.

Are there clinical manifestations of hyperthyroidism or hypothyroidism?

The functional nature of a goitre affects the differentials which should be considered. Symptoms of hyperthyroidism include palpitations, diarrhoea, weight loss despite increased appetite, heat intolerance, oligomenorrhoea or anxiety. Classical signs such as tremor or hyperactivity may be absent in the elderly with 'apathetic thyrotoxicosis', whose predominant symptoms may be lethargy and weakness. The differential diagnoses to consider in goitres with hyperthyroidism can be found in . Conversely, patients with hypothyroidism may have lethargy, cold intolerance, weight gain, depression, constipation, severe bradycardia, hypothermia or altered sensorium with confusion or obtundation in myxoedema coma.

Are there any associated symptoms suggestive of malignancy (including risk factors)?

Risk factors for thyroid malignancy include male gender, age less than 20 years or more than 65 years, history of head and neck radiation, family history of thyroid cancer and multiple endocrine neoplasia type 2 (MEN2). Features suggestive of malignancy include:

Rapid growth of the goitre over time, which may suggest anaplastic thyroid carcinoma or lymphoma
A hard, single nodule and/or nodules fixed to surrounding structures
Hoarseness due to recurrent laryngeal nerve invasion
Non-resolving cervical lymphadenopathy
Symptoms or signs of distant metastases
Symptoms suggestive of thyroid lymphoma, such as fever, weight loss and night sweats.

Physical examination

The physical examination can be grouped under three main categories: examination of the goitre and the surrounding structures, identification of clinical signs of thyroid dysfunction, and extrathyroidal signs specific to Graves' disease.

Examination of the goitre and surrounding neck structures

Inspection: Look for scars indicating previous thyroid surgery or injury. Is there a diffuse enlargement or a localised solitary swelling of the goitre? Ask the patient to swallow water and look for movement of the goitre. A goitre and a thyroglossal cyst both move with swallowing. Owing to its attachment to the foramen caecum at the base of the tongue, a thyroglossal cyst would also move up with tongue protrusion, whereas a goitre would not.
Palpation: Palpate from behind the patient, with the neck bent slightly to relax the sternocleidomastoid muscles. Determine the goitre's extent, size, consistency and tenderness (clinically palpable thyroid nodules are generally more than 1 cm in size). Check for regional cervical lymphadenopathy. Palpate for any tracheal deviation, and percuss for any retrosternal extension.
Others: Auscultate for a bruit over the thyroid, which may be heard in Graves' disease. Pemberton's sign may be elicited in patients with suspected retrosternal extension of goitres. It is positive when the patient develops facial plethora, respiratory distress, stridor and distension of the neck veins after bilateral upper limb elevation for a minimum of one minute. This implies thoracic inlet obstruction.

Clinical signs of thyroid dysfunction

Apart from eliciting symptoms in the history, physical examination can also reveal other signs in patients with thyroid hormonal dysfunction. Patients with hyperthyroidism have tremors and diaphoresis. There may also be proximal muscle weakness and hyperreflexia. Patients with hypothyroidism have dry skin with coarse hair, non-pitting oedema and delayed deep tendon reflexes, especially the ankle reflex.

Extrathyroidal signs specific to Graves' disease (eye signs, dermopathy and nail changes)

Eye signs indicative of Graves' disease include exophthalmos, proptosis (forward displacement of the eyeball), chemosis (conjunctival oedema), lid lag and limited extraocular movement. Clinically significant thyroid eye disease is shown in . Pretibial myxoedema is a localised non-pitting thickening and induration of the skin over the lower legs or dorsum of the feet in patients with Graves' disease. Some may also display thyroid nail disease with clubbing of the fingertips, soft tissue swelling or onycholysis.
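The malignancy red flags listed in the history section above lend themselves to a simple checklist. Purely as an illustrative sketch, and not a validated clinical decision tool, the features named in this article could be encoded as follows (the flag names and the screening function are our own):

```python
# Illustrative checklist of the malignancy red flags named in this section.
# Teaching sketch only; not a validated clinical decision tool.

RED_FLAGS = {
    "rapid_growth",               # may suggest anaplastic carcinoma or lymphoma
    "hard_fixed_nodule",          # hard single nodule and/or fixed to surrounding structures
    "hoarseness",                 # recurrent laryngeal nerve invasion
    "persistent_lymphadenopathy", # non-resolving cervical lymphadenopathy
    "distant_metastases",         # symptoms or signs of distant spread
    "b_symptoms",                 # fever, weight loss, night sweats (thyroid lymphoma)
}

def present_red_flags(findings: set[str]) -> set[str]:
    """Return the subset of the recorded findings that match the listed red flags."""
    return findings & RED_FLAGS

if __name__ == "__main__":
    case = {"rapid_growth", "hoarseness", "snoring"}
    print(sorted(present_red_flags(case)))  # -> ['hoarseness', 'rapid_growth']
```

Any non-empty result would simply mark the case for the specialist referral pathways discussed in this article.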
In a patient who presents with an anterior neck mass, it is prudent to consider other differential diagnoses apart from a goitre . These can be categorised into three main causes. Congenital abnormalities are usually non-tender and slow growing. They remain asymptomatic unless an infection occurs, which can result in a tender mass with fever or discharging sinuses. Inflammatory causes from cervical lymphadenopathy can be attributed to either infective or non-infective causes. Submental cervical lymphadenitis tends to be associated with infections of the lip, floor of the mouth and skin of the cheeks. The referral for imaging and evaluation should be considered in patients with persistent lymphadenopathy despite 6 weeks of monitoring or suspected bacterial lymphadenopathy with worsening symptoms despite initial antibiotic treatment. Neoplastic lesions can be either benign or malignant. Benign masses, which are generally slow growing, include lipomas, epidermal cysts or neuromas. Malignant masses may be due to primary cancers (such as lymphomas or sarcomas) or lymph node metastases from cancers of the head and neck, upper respiratory tract, oesophagus or a distant site.
The following four main questions in history-taking are helpful in distinguishing the causes of the goitre. What are the characteristics of the goitre and its possible triggers? Duration: Has it been present since childhood? Involvement: Is it diffusely enlarged or a solitary nodule? The rate of growth: Is it slow growing or rapidly enlarging? (rapid enlargement may be suggestive of malignancies such as anaplastic thyroid carcinoma or lymphoma) Associated symptoms: (a) Focal pain or fever would be suggestive of thyroiditis. Patients may present with sudden onset of pain with rapid enlargement if spontaneous haemorrhage into a thyroid nodule occurs. Anaplastic thyroid carcinoma may also present as a rapidly enlarging and painful neck mass and (b) ophthalmic symptoms in Graves’ disease include diplopia, blurring of vision, orbital pain or gritty sensation with increased tearing Recent upper respiratory tract infections in up to 2–8 weeks may precipitate subacute thyroiditis Iodine-deficient diets lead to the formation of colloid nodular goitres A personal history of autoimmune diseases (such as myasthenia gravis, Addison’s disease, pernicious anaemia, type 1 diabetes mellitus, rheumatoid arthritis, systemic lupus erythematosus or vitiligo) is associated with autoimmune thyroid diseases such as Graves’ disease and Hashimoto’s thyroiditis. Are there any compressive symptoms? Compressive symptoms due to impingement or displacement of the trachea, oesophagus or great vessels can occur in large goitres or those with retrosternal extension. This is due to the confined space of the thoracic inlet. Tracheal compressions may manifest as dyspnoea, stridor, wheezing or cough. Depending on the severity of compression, symptoms may occur at rest, on exertion or with positional changes. Patients with goitres with intrathoracic extension may experience dyspnoea during manoeuvres that push the thyroid into the thoracic inlet, such as bending forward or lying supine. 
Hoarseness of voice may be seen in patients with invasion or compression of the recurrent laryngeal nerve, causing transient or permanent vocal cord paralysis. Occasionally, goitres may compress the cervical sympathetic chain, resulting in Horner’s syndrome with a triad of ptosis, miosis and decreased sweating on the ipsilateral side of the face. Patients may have dyspnoea due to phrenic nerve paralysis. Rarely, superior vena cava syndrome manifesting as facial swelling and jugular vein thrombosis may develop. Are there clinical manifestations of hyperthyroidism or hypothyroidism? The functional nature of a goitre affects the differentials which should be considered. Symptoms of hyperthyroidism include palpitations, diarrhoea, weight loss despite increased appetite, heat intolerance, oligomenorrhoea or anxiety. Classical signs such as tremor or hyperactivity may be absent in the elderly with ‘apathetic thyrotoxicosis’, whose predominant symptom may be that of lethargy and weakness. The differential diagnoses to consider in goitres with hyperthyroidism can be found in . Conversely, patients with hypothyroidism may have lethargy, cold intolerance, weight gain, depression, constipation, severe bradycardia, hypothermia or altered sensorium with confusion or obtundation in myxoedema coma. Are there any associated symptoms suggestive of malignancy (including risk factors)? Risk factors of a thyroid malignancy include male gender, age less than 20 years or more than 65 years, history of head and neck radiation, family history of thyroid cancer and multiple endocrine neoplasia type 2 (MEN2). 
Features suggestive of malignancy include: Rapid growth of the goitre over time, which may suggest anaplastic thyroid carcinoma or lymphoma Hard, single nodule and/or nodules fixed to surrounding structures Hoarseness due to recurrent laryngeal nerve invasion Non-resolving cervical lymphadenopathy Symptoms or signs of distant metastases Symptoms suggestive of thyroid lymphoma, such as fever, weight loss and night sweats.
Duration: Has it been present since childhood? Involvement: Is it diffusely enlarged or a solitary nodule? The rate of growth: Is it slow growing or rapidly enlarging? (rapid enlargement may be suggestive of malignancies such as anaplastic thyroid carcinoma or lymphoma) Associated symptoms: (a) Focal pain or fever would be suggestive of thyroiditis. Patients may present with sudden onset of pain with rapid enlargement if spontaneous haemorrhage into a thyroid nodule occurs. Anaplastic thyroid carcinoma may also present as a rapidly enlarging and painful neck mass and (b) ophthalmic symptoms in Graves’ disease include diplopia, blurring of vision, orbital pain or gritty sensation with increased tearing Recent upper respiratory tract infections in up to 2–8 weeks may precipitate subacute thyroiditis Iodine-deficient diets lead to the formation of colloid nodular goitres A personal history of autoimmune diseases (such as myasthenia gravis, Addison’s disease, pernicious anaemia, type 1 diabetes mellitus, rheumatoid arthritis, systemic lupus erythematosus or vitiligo) is associated with autoimmune thyroid diseases such as Graves’ disease and Hashimoto’s thyroiditis.
Compressive symptoms due to impingement or displacement of the trachea, oesophagus or great vessels can occur in large goitres or those with retrosternal extension. This is due to the confined space of the thoracic inlet. Tracheal compressions may manifest as dyspnoea, stridor, wheezing or cough. Depending on the severity of compression, symptoms may occur at rest, on exertion or with positional changes. Patients with goitres with intrathoracic extension may experience dyspnoea during manoeuvres that push the thyroid into the thoracic inlet, such as bending forward or lying supine. Hoarseness of voice may be seen in patients with invasion or compression of the recurrent laryngeal nerve, causing transient or permanent vocal cord paralysis. Occasionally, goitres may compress the cervical sympathetic chain, resulting in Horner’s syndrome with a triad of ptosis, miosis and decreased sweating on the ipsilateral side of the face. Patients may have dyspnoea due to phrenic nerve paralysis. Rarely, superior vena cava syndrome manifesting as facial swelling and jugular vein thrombosis may develop.
The functional nature of a goitre affects the differentials which should be considered. Symptoms of hyperthyroidism include palpitations, diarrhoea, weight loss despite increased appetite, heat intolerance, oligomenorrhoea or anxiety. Classical signs such as tremor or hyperactivity may be absent in the elderly with ‘apathetic thyrotoxicosis’, whose predominant symptom may be that of lethargy and weakness. The differential diagnoses to consider in goitres with hyperthyroidism can be found in . Conversely, patients with hypothyroidism may have lethargy, cold intolerance, weight gain, depression, constipation, severe bradycardia, hypothermia or altered sensorium with confusion or obtundation in myxoedema coma.
Risk factors of a thyroid malignancy include male gender, age less than 20 years or more than 65 years, history of head and neck radiation, family history of thyroid cancer and multiple endocrine neoplasia type 2 (MEN2). Features suggestive of malignancy include: Rapid growth of the goitre over time, which may suggest anaplastic thyroid carcinoma or lymphoma Hard, single nodule and/or nodules fixed to surrounding structures Hoarseness due to recurrent laryngeal nerve invasion Non-resolving cervical lymphadenopathy Symptoms or signs of distant metastases Symptoms suggestive of thyroid lymphoma, such as fever, weight loss and night sweats.
The physical examination can be grouped under three main categories, examination of the goitre and the surrounding structures, and identifying clinical signs of thyroid dysfunction and extrathyroidal signs specific to Graves’ disease. Examination of the goitre and surrounding neck structures Inspection: Look for scars indicating previous thyroid surgery or injury. Is there a diffuse enlargement or localised solitary swelling of the goitre? Ask the patient to swallow water and look for the movement of the goitre. A goitre and thyroglossal cyst both move with swallowing. Owing to its attachment to the foramen caecum at the base of the tongue, a thyroglossal cyst would also move up with tongue protrusion, whereas a goitre would not. Palpate from behind the patient, with the neck bent slightly to relax the sternocleidomastoid muscles: Determine the goitre’s extent, size, consistency and tenderness (clinically palpable thyroid nodules are generally more than 1 cm in size). Check for regional cervical lymphadenopathy. Palpate for any tracheal deviation, and percuss for any retrosternal extension. Others: Auscultate for bruit over the thyroid, which may be heard in Graves’ disease. Pemberton’s sign may be elicited for patients with suspected retrosternal extension of goitres. It is positive when the patient develops signs of facial plethora, respiratory depression, stridor and distension of neck veins after bilateral upper limb elevation for a minimum of a minute. This implies thoracic inlet obstruction. Clinical signs of thyroid dysfunction Apart from eliciting symptoms from the history, physical examination can also reveal other signs in patients with thyroid hormonal dysfunction. Patients with hyperthyroidism have tremors and diaphoresis. There may also be proximal muscle weakness and hyperreflexia. Patients with hypothyroidism have dry skin with coarse hair, non-pitting oedema and delayed deep tendon reflexes, especially the ankle reflex. 
Extrathyroidal signs specific to Graves' disease (eye signs, dermopathy and nail changes)

Eye signs indicative of Graves' disease include exophthalmos, proptosis (forward displacement of the eyeball), chemosis (conjunctival oedema), lid lag and limited extraocular movement. Clinically significant thyroid eye disease is shown in . Pretibial myxoedema is a localised non-pitting thickening and induration of the skin over the lower legs or dorsum of the feet in patients with Graves' disease. Some patients also display thyroid nail disease, with clubbing of the fingertips, soft tissue swelling or onycholysis.
The nature of the thyroid enlargement is a key determinant of the goitre's possible aetiology. Therefore, the initial clinical examination should focus on differentiating a diffusely enlarged goitre from a solitary thyroid nodule.

Diffusely enlarged goitre

Multinodular goitre

Multinodular goitre is a nodular enlargement of the thyroid gland in the absence of autoimmune thyroid disease, cancer or underlying inflammation. It is the most common thyroid disorder and is more common in women, with a female-to-male ratio of 13:1. As iodine deficiency contributes to the formation of multinodular goitre, the incidence of this condition is higher in iodine-deplete areas. The diagnosis is based on physical examination and ultrasonography. Compressive symptoms may occur in some patients. Thyroid function can be normal, or the goitre may contain hyperfunctioning nodules, termed 'toxic' multinodular goitre. Thyroid autoantibodies are usually absent or low. Imaging for suspicious nodules and the extent of the goitre can be performed, with corresponding fine-needle aspiration cytology (FNAC) as indicated. Management of the multinodular goitre depends on its size, symptoms and the patient's preferences. Thyroidectomy is indicated for patients with concomitant thyroid malignancy, compressive symptoms or large nodules of more than 4 cm, as these large goitres carry higher risks of malignancy and increased false-negative rates on FNAC. Radioactive iodine (RAI) ablation therapy is an alternative if the above indications are absent and the patient is hyperthyroid. A small, asymptomatic and benign multinodular goitre can be managed expectantly with regular monitoring using ultrasonography and serum thyroid function tests.

Autoimmune causes (Graves' disease and Hashimoto's thyroiditis)

Autoimmune thyroid diseases are caused by the body's immune response to specific thyroid antigens. Graves' disease is an autoimmune disease caused by thyrotropin receptor antibodies (TRAb). It is the most common cause of hyperthyroidism. Smoking results in a twofold increase in the risk of its development. The incidence is eight times greater in women than in men, and it commonly affects women between 30 and 60 years of age. Graves' disease is classically associated with a diffuse goitre and hyperthyroidism, with or without Graves' ophthalmopathy. Rarely, patients may have pretibial myxoedema or acropachy. Hyperthyroidism is due to autoantibody-induced activation of thyroid-stimulating hormone (TSH) receptors, which increases thyroid hormone secretion. A positive TRAb on blood tests further supports the diagnosis of Graves' disease. Thyrotoxicosis, if left untreated, may result in thyroid storm, congestive cardiac failure and dangerous arrhythmias such as atrial fibrillation, as well as cardiovascular collapse.

Hashimoto's thyroiditis, also known as chronic autoimmune thyroiditis, occurs because of autoimmune-mediated obliteration of the normal thyroid gland due to lymphocytic infiltration, fibrosis and loss of follicular epithelium. The peak incidence occurs in women aged 30–50 years, who often present with painless, diffuse enlargement of the thyroid gland. Some patients are euthyroid initially but may progress to hypothyroidism. The diagnosis is further supported by positive anti-thyroid peroxidase antibodies (TPOAb) or anti-thyroglobulin antibodies (TgAb).

Subacute thyroiditis

Subacute thyroiditis, also known as de Quervain's thyroiditis, presents with a tender, diffuse goitre and may be associated with fatigue, fever or pharyngitis. It is often preceded by an upper respiratory viral infection. Thyroid dysfunction may occur in stages. Initial thyrotoxicosis lasting 3–6 weeks, owing to the destruction of thyroid follicles, occurs in around 50% of patients. Subsequent progression to hypothyroidism (which may last for up to 6 months) is observed in 30% of patients. Patients eventually revert to the euthyroid state, with resolution of the goitre approximately a year after onset. Initial investigations reveal leucocytosis, an elevated serum erythrocyte sedimentation rate (ESR) and no uptake on a radionuclide thyroid uptake scan. Treatment in symptomatic individuals is usually with non-steroidal anti-inflammatory agents and beta blockers. Anti-thyroid drugs are not required, owing to the risk of subsequent hypothyroidism.

Solitary thyroid nodule

A thyroid nodule is a localised lesion that appears distinct from the surrounding thyroid gland on palpation or ultrasonography. It may present as a solitary thyroid nodule in an otherwise normal thyroid gland or as a dominant nodule in a diffuse or multinodular goitre.

Dominant nodule of multinodular goitre (euthyroid/toxic)

More than 50% of patients with a clinically palpable solitary nodule are eventually found to have multiple nodules on ultrasonography. Although most multinodular goitres are euthyroid, some with large hyperfunctioning nodules may develop hyperthyroidism. As patients with multinodular goitres have the same incidence of malignant transformation as those with solitary thyroid nodules, they should be evaluated using a similar approach. Ultrasonography should evaluate each nodule within a multinodular goitre, rather than focusing only on the dominant nodule, to avoid missing an underlying malignancy.

Thyroid cyst

Thyroid cysts are discrete hypoechoic areas on ultrasonography, as they are mostly fluid-filled. True simple cysts are benign but rare, found in only 1% of nodules. Most thyroid cysts have mixed solid components with areas of cystic degeneration, and most are degenerating thyroid adenomas. A higher proportion of cystic components in a nodule indicates a lower possibility of malignancy. FNAC can be performed on mixed cystic-solid lesions that are 2 cm or larger, or on lesions with suspicious ultrasonography features. Aspiration may also relieve compressive symptoms caused by large cystic nodules. Purely cystic nodules do not need to be biopsied. Surgical excision is indicated for benign symptomatic cysts that re-accumulate despite repeated aspirations, or when there are suspicious features in a mixed cystic-solid thyroid lesion and benign cytology cannot be obtained.

Simple/colloid goitre

Colloid goitres, also known as 'simple goitres', are benign lesions. They consist of colloid, an acellular glycoprotein in which thyroid hormones are stored. Approximately 60%–70% of thyroid nodules are colloid nodules. Ultrasonography often reveals comet-tail artefacts, which are due to reverberation echoes between two surfaces. FNAC may be required if there are suspicious features. Patients with colloid nodules are followed up every 6 months or yearly with repeat ultrasonography. Surgical excision is performed only if these goitres are complicated by compressive symptoms.

Neoplasm (adenoma and malignancy)

Neoplastic thyroid nodules can be benign or malignant. Follicular adenomas are benign lesions. Thyroid nodules with FNAC showing follicular cells with atypia require surgery to exclude the capsular and vascular invasion seen in follicular cancer. If histology confirms a follicular adenoma with organised follicular cells, a hemi-thyroidectomy suffices and no further follow-up is required. Thyroid malignancies are classified into three main types: differentiated cancers (papillary or follicular cancers, which account for 90%–95% of cancers), medullary cancers (which account for 6% of cancers) and undifferentiated cancers (anaplastic cancers, which account for less than 1% of cancers). Management differs according to the type of malignancy. In differentiated cancers, thyroidectomy is the mainstay of treatment, with neck dissection in patients who have cervical lymph node metastasis (common in papillary thyroid cancer), and consideration of RAI ablation and subsequent TSH suppression in candidates at high risk of recurrence. Well-differentiated thyroid cancer generally has a good prognosis after completion of treatment. Medullary cancers require further evaluation for concomitant hyperparathyroidism and pheochromocytoma before surgery, as they are associated with MEN2. Anaplastic cancers tend to be locally advanced or to have distant metastases at the time of diagnosis, given their fast and aggressive course, with a mortality rate of nearly 100%. Therefore, the role of surgery is limited in this group of patients, and palliation is often required.
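The three malignancy classes and their management, as described above, can be collected into a small summary lookup. This is purely a restatement of the text as a data structure, not treatment guidance:

```python
# Summary lookup of the thyroid malignancy types described in the text.
# Illustrative only; not a substitute for guideline-based management.
THYROID_CANCER_TYPES = {
    "differentiated (papillary/follicular)": {
        "share_of_cancers": "90-95%",
        "management": "thyroidectomy, neck dissection if nodal metastasis; "
                      "consider RAI ablation and TSH suppression if high "
                      "recurrence risk",
    },
    "medullary": {
        "share_of_cancers": "~6%",
        "management": "evaluate for hyperparathyroidism and pheochromocytoma "
                      "before surgery (MEN2 association)",
    },
    "anaplastic (undifferentiated)": {
        "share_of_cancers": "<1%",
        "management": "often locally advanced or metastatic at diagnosis; "
                      "limited role for surgery, palliation often required",
    },
}

for cancer_type, info in THYROID_CANCER_TYPES.items():
    print(f"{cancer_type}: {info['share_of_cancers']}")
```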
In primary care

After a thorough history-taking and physical examination, a serum thyroid function test is a useful first-line investigation that provides further clues to the aetiology and subsequent management. Serum TSH levels allow initial determination of the patient's thyroid function. Low TSH and high free thyroxine (fT4) levels are consistent with a hyperfunctioning goitre. In such cases, a serum TRAb test helps support the diagnosis of Graves' disease over toxic multinodular goitre, because up to 90% of patients with Graves' disease have positive TRAb, whereas it is usually negative or very low in toxic multinodular goitre and subacute thyroiditis. High TSH and low fT4 levels indicate hypothyroidism; high levels of TPOAb or TgAb support the diagnosis of Hashimoto's thyroiditis. The sensitivity of TPOAb is more than 90% for Hashimoto's thyroiditis and increases further to 97% if both TPOAb and TgAb are measured. A serum TSH that is elevated or at the upper limit of normal has been associated with an increased risk of malignancy within a thyroid nodule; hence, further evaluation with ultrasonography is advised. Other tests that may be useful in hyperthyroid patients include serum ESR, which may be elevated in subacute thyroiditis.

In tertiary care

Imaging

Ultrasonography of the thyroid gland is recommended in all patients presenting with goitres, as part of the initial investigation. Apart from detecting nodules that are not palpable, it can characterise the goitre and look for suspicious features of malignancy that require FNAC. shows an ultrasonography image of a papillary thyroid cancer.

Fine-needle aspiration cytology

FNAC is efficient, safe and cost-effective, and is the gold standard for cytological diagnosis of thyroid nodules with suspicious ultrasonography features, to rule out malignancy. It is not required for purely cystic thyroid nodules. The results of FNAC are reported using the six categories of the Bethesda System for Reporting Thyroid Cytopathology, which estimates the corresponding risk of malignancy; subsequent management is based on the category. The six categories are: (a) unsatisfactory, (b) benign, (c) atypia, (d) follicular neoplasm (which may require excision to truly differentiate a benign follicular adenoma from a follicular carcinoma), (e) suspicious for malignancy and (f) malignant.
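The first-line interpretation of thyroid function tests described above follows a simple branching logic. A minimal sketch, assuming placeholder reference ranges (real cut-offs are laboratory-dependent); illustrative only, not a diagnostic tool:

```python
# Sketch of the TSH/fT4 interpretation logic described in the text.
# Reference ranges are lab-dependent; the values below are placeholders.
# Illustrative only -- not a diagnostic tool.

TSH_LOW, TSH_HIGH = 0.4, 4.0    # mIU/L, placeholder reference range
FT4_LOW, FT4_HIGH = 10.0, 20.0  # pmol/L, placeholder reference range

def interpret_tft(tsh: float, ft4: float) -> str:
    if tsh < TSH_LOW and ft4 > FT4_HIGH:
        # Hyperfunctioning goitre: TRAb helps separate Graves' disease
        # (positive in up to 90%) from toxic MNG / subacute thyroiditis.
        return "hyperthyroid -- consider TRAb"
    if tsh > TSH_HIGH and ft4 < FT4_LOW:
        # Hypothyroid: TPOAb/TgAb support Hashimoto's thyroiditis.
        return "hypothyroid -- consider TPOAb/TgAb"
    return "not overtly hyper-/hypothyroid on these values"

print(interpret_tft(tsh=0.01, ft4=35.0))
print(interpret_tft(tsh=12.0, ft4=5.0))
```

Note that this sketch deliberately omits the subclinical patterns (abnormal TSH with normal fT4), which the text does not cover.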
After a thorough history-taking and physical examination, performing a serum thyroid function test is a useful first-line investigation to provide more clues to the aetiology and subsequent management. Serum TSH levels allow initial determination of the patient’s thyroid function. Low TSH and high free thyroxine (fT4) levels are consistent with a hyperfunctioning goitre. In such cases, performing a serum TRAb test is helpful to support the diagnosis of Graves’ disease, as opposed to toxic multinodular goitres. This is because up to 90% of patients with Graves’ disease have positive TRAb, whereas it is usually negative or very low in toxic multinodular goitres and subacute thyroiditis. High TSH and low fT4 levels indicate hypothyroidism. High levels of TPOAb or TgAb support the diagnosis of Hashimoto’s thyroiditis. The sensitivity of TPOAb is more than 90% for Hashimoto’s thyroiditis and is further increased to 97% if both TPOAb and TgAb are measured. Serum TSH that is elevated or within the upper limit of normal has been associated with an increased risk of malignancy within a thyroid nodule; hence, further evaluation with ultrasonography is advised. Other tests that may be useful for hyperthyroid patients include serum ESR, which may be elevated in patients with subacute thyroiditis.
Imaging Ultrasonography of the thyroid gland is recommended in all patients presenting with goitres, as part of the initial investigation. Apart from detecting nodules that are not palpable, it can characterise the goitre and look for suspicious features of malignancy that require FNAC. shows an ultrasonography image of a papillary thyroid cancer. Fine-needle aspiration cytology FNAC is efficient, safe, cost-effective and the gold standard for cytological diagnosis of suspicious thyroid nodules on ultrasonography to rule out malignancy. It is not required for purely cystic thyroid nodules. The results of FNAC are reported using the six categories of the Bethesda System for Reporting Thyroid Cytopathology, which estimates the corresponding risk of malignancy. Subsequent management will then be based on the category. The six categories are: (a) unsatisfactory, (b) benign, (c) atypia, (d) follicular neoplasm (which may require excision to truly differentiate between a benign follicular adenoma from a follicular carcinoma), (e) suspicious for malignancy and (f) malignant.
Management of a goitre should be directed at its cause, associated thyroid dysfunction and compressive symptoms.

Graves' disease
On initial review, patients may present with palpitations due to thyrotoxicosis. Oral beta blockers such as propranolol may be useful in controlling tachycardia and relieving tremors. A relatively beta-1-selective beta blocker such as atenolol may be used with close monitoring in patients with well-controlled asthma. In patients with contraindications such as uncontrolled asthma, an alternative would be a calcium channel blocker such as diltiazem. The treatment of Graves' disease depends on patient factors (such as age, pregnancy or plans to conceive) and goitre factors (such as a large retrosternal goitre or high suspicion of malignancy). There are three main treatment options: (a) anti-thyroid drugs that block thyroid hormone synthesis, (b) RAI that ablates the thyroid gland and (c) thyroidectomy. The principles of treatment are summarised in an accompanying table. Anti-thyroid drugs (thionamides) are the preferred initial therapy for most patients, including older patients with limited life expectancy. Because of the higher risk of hepatotoxicity with propylthiouracil, thiamazoles such as carbimazole are the recommended first-line drugs, except for women in the first trimester of pregnancy: thiamazoles have been associated with severe birth defects (such as aplasia cutis and choanal and oesophageal atresia) in up to 4% of patients, compared with minor birth defects (including pre-auricular sinuses and neck cysts) in about 2% of patients on propylthiouracil. Patients must also be counselled on the risk of agranulocytosis while on anti-thyroid drug treatment and advised to seek medical attention if they develop fever, sore throat or oral ulcers. Agranulocytosis occurs in 0.1%–0.3% of patients on anti-thyroid drugs.
Prior to initiation of anti-thyroid drugs, a baseline full blood count and liver function tests are recommended. There are currently no consensus recommendations for routine monitoring of differential white blood cell counts or liver function tests in patients taking anti-thyroid drugs, unless they develop a fever with pharyngitis or symptoms indicative of hepatotoxicity (such as pruritic rash, jaundice, pale stools, dark urine, joint pains, abdominal pain, anorexia or fatigue). Minor, non-vasculitic skin reactions (such as pruritus and transient rashes) can be treated with antihistamines without cessation of anti-thyroid drugs. In cases with persistent minor side effects, switching between types of anti-thyroid drugs and exploring alternatives of RAI ablation or surgery can be considered. The initial dose would depend on the clinical severity of the hyperthyroidism, fT4 level and size of the goitre. Patients with larger goitres and higher levels of fT4 would require a higher dose (such as carbimazole 30 mg every morning). A dose ratio of carbimazole to propylthiouracil in divided doses of 1:10 is recommended when switching from one drug to another. The dose is titrated every 4–6 weeks based on clinical symptoms and the initial level of fT4, as the normalisation of TSH usually lags by 3–6 months. The dose can be halved when the fT4 value halves. Hence, these patients would need to be on chronic follow-up with a family physician, with periodic serum thyroid function tests. Therapy should be continued and weaned, if possible, over 18 months. Remission is defined as normal serum TSH, fT4 and triiodothyronine (T3) for one year after discontinuation of anti-thyroid drugs, and can be achieved in up to 50%–60% of patients after treatment for 2 years. Further prolonged treatment beyond 18 months has not been shown to improve the remission rate. Remission is more likely in young patients with smaller goitres, mild hyperthyroidism, lower T3 levels and those with low TRAb. 
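The two dosing heuristics mentioned above — the 1:10 carbimazole-to-propylthiouracil switching ratio (with PTU given in divided doses) and halving the dose once fT4 has halved — can be sketched as follows. The function names and the three-times-daily split are illustrative assumptions, not a prescribing tool.

```python
# Minimal sketch of the two dosing rules above. The 1:10 ratio and the
# halving heuristic come from the text; the function names and the
# three-times-daily split of PTU are illustrative assumptions.

def carbimazole_to_ptu(carbimazole_mg_daily, doses_per_day=3):
    """Total daily PTU (mg) at the 1:10 ratio, split into divided doses."""
    total = carbimazole_mg_daily * 10
    per_dose = total / doses_per_day
    return total, per_dose

def titrate_on_ft4(current_dose_mg, previous_ft4, current_ft4):
    """Halve the dose once fT4 has halved; otherwise keep it unchanged."""
    if current_ft4 <= previous_ft4 / 2:
        return current_dose_mg / 2
    return current_dose_mg

# e.g. switching a patient from carbimazole 30 mg every morning
total, per_dose = carbimazole_to_ptu(30)
```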
For patients in whom medical treatment has failed or is contraindicated and those who are not keen on surgery, RAI therapy is another effective option. Contraindications include women who are considering conception within 6 months, pregnant or lactating women, and those with active Graves' ophthalmopathy or uncontrolled hyperthyroidism. Patients are expected to develop hypothyroidism after treatment and need lifelong thyroxine replacement. Suitable patients can be referred to nuclear medicine specialists for RAI therapy, which involves administering RAI as an odourless and colourless oral tablet. Patients must be euthyroid before referral. They then have to avoid contact with pregnant women and young children for a week and avoid conception for 6 months. Thyroid surgery can be considered in patients whose medical treatment has failed, who have contraindications for RAI therapy, or who have large goitres causing compressive symptoms or goitres with malignant features. The care of patients after thyroidectomy is discussed later in this article. Patients with thyroid eye disease benefit from an early referral to ophthalmologists for further assessment.

Hashimoto's thyroiditis
Thyroid hormone replacement is indicated in patients with hypothyroidism due to Hashimoto's thyroiditis. The initial dose of oral levothyroxine depends on the initial serum TSH level, age, comorbidities and body weight. In young and healthy patients with elevated serum TSH, the full replacement dose of 1.6 mcg/kg body weight can be initiated. In elderly patients with mildly elevated serum TSH or those with known coronary artery disease, lower doses of levothyroxine (e.g., 12.5–25.0 mcg daily) can be started, with gradual titration and close monitoring of response and tolerance. To prevent reduced absorption, oral levothyroxine should be taken 4 hours apart from other medications such as iron or calcium supplements.
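A minimal sketch of the starting-dose logic just described: the 1.6 mcg/kg full replacement dose and the 12.5–25.0 mcg cautious start come from the text, while the rounding step, the tablet increment and the function name are assumptions for illustration only.

```python
# Sketch of the levothyroxine starting-dose logic above. The 1.6 mcg/kg
# full replacement dose and the cautious 12.5-25.0 mcg start are from the
# text; rounding to 12.5 mcg increments is an illustrative assumption.

def levothyroxine_start_dose(weight_kg, elderly_or_cad=False):
    """Return a starting daily levothyroxine dose in mcg."""
    if elderly_or_cad:
        # Cautious start with gradual titration and close monitoring.
        return 25.0
    full_dose = 1.6 * weight_kg
    # Round to the nearest 12.5 mcg increment (assumption, for illustration).
    return round(full_dose / 12.5) * 12.5
```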
Dose adjustments are usually made 4–6 weeks after initiation. Overall, the aim is for serum TSH to be near the lower limit of normal. After the target serum TSH level is reached, patients can be reviewed every 6 months or annually. Most patients will be on indefinite treatment with thyroxine. Surgery is rarely indicated, unless there are significant compressive symptoms despite medical treatment.

Post thyroidectomy
General practitioners may also have to care for patients who have had a thyroidectomy, which may be a total thyroidectomy or hemi-thyroidectomy (removal of one lobe of the thyroid gland). Management differs depending on the initial indication for the surgery (benign or malignant).

Thyroid hormone replacement
Patients who have had a total thyroidectomy for benign goitres (e.g. Graves' disease and multinodular goitres) need long-term thyroxine replacement and periodic clinical review with serum thyroid function tests. Those who undergo total thyroidectomy for thyroid malignancy are first treated with RAI therapy to ablate residual thyroid tissue, particularly if they have extensive extrathyroidal spread or metastasis (as discussed later in this article). After RAI therapy, patients receive thyroxine replacement at supraphysiological doses to achieve TSH suppression, which reduces the risk of recurrence. The degree of suppression depends on the risk of recurrence of the thyroid cancer and must be balanced against the potential side effects of subclinical hyperthyroidism. These include angina in patients with ischaemic heart disease, an increased risk of atrial fibrillation, and a risk of osteoporosis in both sexes, especially the elderly and postmenopausal women.

Radioactive iodine ablation
RAI therapy is used to ablate the remaining thyroid tissue after surgery, for better surveillance of cancer recurrence in the future, or as adjunctive therapy for local or distant metastatic thyroid cancer.
Patients are advised to be on a low-iodine diet for 2–3 weeks before treatment to ensure iodine depletion from cells for effective RAI therapy. Levothyroxine is withdrawn before treatment to raise TSH and improve radioiodine uptake. After treatment, patients can expose others to radiation emitted from their bodies or through bodily fluids. They should avoid sharing utensils, sleeping in the same bed as others, sexual contact, and close contact with children and pregnant women for some time after treatment. The duration depends on the dose of RAI therapy administered, ranging from 1 day to 3 weeks. Pregnancy should be delayed until 6 months after RAI therapy to ensure that any thyroid dysfunction is adequately treated. For men, conception should be delayed for about 4 months.

Postoperative hypoparathyroidism
Transient or permanent postoperative hypoparathyroidism may occur if there is disruption of the blood supply to the parathyroid glands during thyroidectomy. This can result in hypocalcaemia (which may manifest as circumoral numbness, tingling in the fingers and toes, Chvostek's sign and carpopedal spasm) and hyperphosphataemia. Hypoparathyroidism is considered permanent if it persists for more than 6 months after surgery. Patients with permanent hypoparathyroidism are treated with oral calcium replacement together with activated vitamin D (calcitriol) to facilitate calcium absorption, and with adequate magnesium replacement. Calcium targets should be within the lower limit of normal (often in the range of 2.00–2.20 mmol/L) to prevent hypercalciuria resulting from complete correction of hypocalcaemia in these patients.
An adult patient presenting with a goitre can be managed according to the accompanying flowchart. Goitres with concomitant stridor and respiratory distress, signs of thyroid storm or severe hypothyroidism warrant an immediate referral to the emergency department. Goitres with red flags suggestive of malignancy (rapid enlargement over a few weeks and/or non-resolving cervical lymphadenopathy) and compressive symptoms (such as dysphagia or dysphonia) should be referred urgently, within 1–2 weeks, to a thyroid surgeon for further assessment.

The following cases should be referred to a thyroid surgeon:
- All solitary nodules and non-toxic multinodular goitres, for further imaging, subsequent FNAC and consideration for surgery.
- Patients with Graves' disease in whom medical treatment has failed and who have contraindications for RAI therapy, or with large obstructive goitres, for consideration of thyroidectomy.

The following cases should be referred to an endocrinologist:
- Pregnant patients with goitres and/or hyperthyroidism or hypothyroidism (these patients should also be co-managed with obstetricians)
- Goitres with hypothyroidism
- Subacute thyroiditis
- Toxic multinodular goitres without compressive symptoms
- Patients with Graves' disease who have experienced adverse side effects from anti-thyroid drugs or have a poor response despite prolonged medical treatment, with recurrent relapses
- Autoimmune thyroid disease with concomitant autoimmune conditions (such as type 1 diabetes mellitus).

Patients with Graves' disease in whom medical treatment has failed and who do not have compelling contraindications for RAI therapy can be referred to nuclear medicine for RAI therapy.
Goitre is an abnormal enlargement of the thyroid gland, presenting as a solitary thyroid nodule or diffuse enlargement of the gland. Patients with goitres can be euthyroid, hypothyroid or hyperthyroid. Red flags include compressive symptoms such as dysphagia, dyspnoea or any voice change. Persistent or suspicious cervical lymphadenopathy is also associated with thyroid malignancy. These patients should be referred to specialists urgently. Initial investigations include a thyroid function test and thyroid ultrasonography. Anti-thyrotropin receptor antibody testing should be ordered for hyperthyroidism and anti-thyroid peroxidase antibody testing for hypothyroidism to differentiate the aetiologies of Graves' disease and Hashimoto's thyroiditis, respectively. The choice of specialist referral depends on the suspected aetiology of the goitre. A goitre with thyroid hormone dysfunction likely of autoimmune aetiology should be referred to an endocrinologist, while a solitary thyroid nodule in a euthyroid patient is usually referred to a surgeon. FNAC is the gold standard for the cytological diagnosis of suspicious thyroid nodules on ultrasonography to rule out malignancy. Patients with thyroid malignancies who have undergone total thyroidectomy may be receiving supraphysiological doses of thyroxine for TSH suppression to reduce the risk of recurrence. Treatment options for Graves' disease include anti-thyroid medication (thionamides), RAI and surgery. Carbimazole is an appropriate first-line therapy with regular thyroid function monitoring. Women with uncontrolled Graves' disease despite anti-thyroid medication who are considering pregnancy should be referred for consideration of thyroid surgery (RAI is contraindicated in those actively planning to conceive).

Closing Vignette
You asked Angela to return to your clinic once the results were available. The tests revealed overt hyperthyroidism, with TSH <0.005 (range: 0.270–4.200) mIU/L and fT4 >100.0 (range: 12.0–22.0) pmol/L.
ESR was within normal limits. After confirming that she did not have asthma, you started her on propranolol 10 mg twice daily and carbimazole 30 mg every morning, ordering repeat thyroid function tests with TSH receptor antibody in four weeks' time. You counselled her on the symptoms of thyroid storm before she left your room and referred her to a tertiary centre for specialist management with an endocrinologist.

Financial support and sponsorship
Nil.

Conflicts of interest
There are no conflicts of interest.
Transcriptomic and metabolomic analysis reveal the cold tolerance mechanism of common beans under cold stress | 146b042e-d7b1-44e8-87d8-a4b311302e0b | 11909926 | Biochemistry[mh] | With the intensification of the global greenhouse effect, fluctuations in global climate have increased, leading to a higher frequency of extreme temperature events. Consequently, plants are experiencing increased exposure to both cold and heat stress . The impact of low temperatures on plants is becoming increasingly severe, resulting in hindered growth and development, reduced yield, and decreased quality. In extreme cases, it can lead to crop death, causing significant losses in agricultural production . Low temperature is a critical factor affecting the overall developmental stages of plant and crop phenology, often resulting in decreased crop yield and quality . According to statistics, cold stress has resulted in a 60% reduction in global yields of legume crops, such as chickpeas and soybeans, and a 70% decrease in mung beans . When subjected to low-temperature stress, plants undergo a series of physiological and biochemical reactions to mitigate the adverse effects of the environmental temperature. Overall, the physiological strategies of plants to cope with low-temperature stress primarily involve the cell membrane system, antioxidant system, osmotic regulation system, metabolic substances, and regulation of endogenous hormones . Plant metabolites result from gene expression, protein interactions, and various regulatory mechanisms, and they are more closely associated with the plant phenotype than mRNA transcripts and proteins. Therefore, plant metabolites are often analyzed to examine plant phenotypes and provide feedback on environmental stress, facilitating the discovery of specific patterns related to stress tolerance . Sugar acid metabolism and amino acid metabolism play significant roles in mediating plant resistance to low-temperature stress . Mu et al. 
found that low temperature induces the transcriptional expression of fructose-1,6-diphosphate aldolase (SiFBA5) in Saussurea involucrata. They observed that tomatoes overexpressing SiFBA5 exhibited enhanced cold tolerance and photosynthetic efficiency. In addition, secondary metabolites play integral roles in physiological processes, including plant growth, development, and defense. Previous studies have shown that plant secondary metabolites respond to low-temperature stress and alleviate oxidative damage by scavenging the reactive oxygen species (ROS) that accumulate under low-temperature stress. Analysis of metabolites from the alpine plant Saussurea involucrata under low-temperature conditions revealed a notable trend: as the temperature decreased, amino acid accumulation within the leaves decreased, accompanied by an increase in phenolic substances. This observation suggests that under low-temperature conditions, Saussurea involucrata may enhance its cold tolerance by augmenting the production of secondary metabolites, particularly phenolic substances. Meanwhile, research has demonstrated that a 15 ℃ treatment significantly increases potato stem diameter, root-to-shoot ratio, yield, and the concentration of secondary metabolites, especially anthocyanin content, suggesting that appropriate low-temperature treatments can be advantageous for enhancing potato tuber pigmentation. Plant hormones serve as pivotal regulators not only in orchestrating active substances in plant physiological responses and governing plant growth, development, and differentiation, but also in mediating stress responses under adverse environmental conditions. Previous studies have shown that plant hormones, acting as vital growth regulators, modulate physiological and biochemical traits under low-temperature stress.
They alleviate oxidative damage by enhancing the accumulation of proline, antioxidants, secondary metabolites, and endogenous hormones, thereby improving plant tolerance to low-temperature stress. Gao et al.'s research suggests that brassinosteroid may function as an upstream signal of NO, inducing an increase in NO content within the plant and subsequently inducing protein S-nitrosylation to alleviate damage to Chinese cabbage seedlings under low-temperature stress. Meanwhile, another study revealed that the exogenous application of methyl jasmonate (MeJA) could mitigate oxidative damage to Solanum lycopersicum under low-temperature stress by enhancing antioxidant enzyme activity and photosynthetic activity. In addition, transcription factors contribute significantly to plant cold tolerance. Research has shown that long hypocotyl 5 (HY5) can directly regulate the transcription level of CBF or indirectly influence CBF expression through MYB15, thereby precisely regulating tomato cold tolerance. Moreover, functional enrichment analysis of the differentially expressed genes in important modules of the cold stress response of Hordeum vulgare L. revealed that these genes are involved in various key pathways related to plant cold tolerance, such as the ABA signaling pathway, the ROS signaling pathway, defense and protective proteins, and degrading proteins. In recent years, multi-omics analysis techniques have gained widespread application in the study of abiotic stress. Transcriptomics and metabolomics techniques provide a more comprehensive understanding by allowing detailed monitoring of metabolic regulation and molecular processes in plants exposed to biotic or abiotic stress environments. Cheng et al. employed multi-omics analysis to identify 18 significant metabolites, two key pathways, and six critical genes responding to low-temperature stress in Helicotrichon virescens. Zhao et al.
conducted a comprehensive analysis integrating transcriptomics and metabolomics to explore the alterations in genes and metabolites of cold-tolerant wheat under low-temperature stress. Their findings revealed the pivotal roles of key pathways associated with ABA/JA signaling and proline biosynthesis in regulating wheat cold tolerance . Multi-omics methodologies therefore offer a comprehensive approach to elucidating cellular life processes from diverse dimensions, enhancing our understanding of the potential mechanisms underlying plant stress resistance. Phaseolus vulgaris L. (common bean), native to Central and South America, is the most extensively planted, cultivated, and consumed legume globally. Renowned for its high protein content and abundant nutrients, it is a vital source of plant-based protein for human consumption . Common beans thrive in warm conditions and are susceptible to frost; temperatures below 10 ℃ can significantly impede their growth and development, making them a cold-sensitive vegetable crop . To delineate the variances between cold-resistant and cold-sensitive common bean materials, we conducted a comprehensive analysis integrating phenotypic and physiological assessments with multi-omics analysis to unveil significant changes in physiological parameters, genes, and metabolites. Through comparative analysis of the physiological indicators, transcriptomes, and metabolomic profiles of common beans subjected to low-temperature stress, we identified flavonoid metabolism and plant hormone signal transduction as key components of the common bean response to low temperature. These findings offer valuable insights into the mechanisms underlying common bean cold tolerance and contribute to the optimal utilization of cold-tolerant resources in common bean breeding programs.
Plant material and cold treatments
The seeds of the two common bean materials were provided by the Vegetable Research Laboratory, College of Horticulture, Sichuan Agricultural University. The seeds of the cold-sensitive 'Bai Bu Lao' (BBL) and cold-tolerant 'Wei Yuan' (WY) common bean varieties were soaked at room temperature for 4 h and then germinated at 25 ℃ in the dark. After germination, the seeds were exposed to white light and subsequently sown in nutrient bowls. The substrate consisted of peat, vermiculite, and perlite in a 3:1:1 ratio. The cultivation conditions were a 12 h light cycle, a day/night temperature of 25 ± 2 ℃/18 ± 2 ℃, a light intensity of 300 µmol/m²·s, and a relative humidity of 60 ± 5%. Experimental treatments were conducted when the seedlings reached the two-leaf and one-heart stage. For the cold stress treatment, the temperature in the artificial climate box was set to 5 ℃ for both day and night. The light cycle was 12 h light/12 h dark, with a light intensity of 100 µmol/m²·s and a relative humidity of 60 ± 5%. Common bean leaves were collected after 0, 6, and 24 h of low-temperature treatment, with three biological replicates per time point. The samples were frozen in liquid nitrogen and stored at -80 ℃ until use. The experiment thus comprised six treatments: BBL0, BBL6, BBL24, WY0, WY6, and WY24.
Physiological measurements
Relative conductivity was measured with a conductivity meter (Shanghai INESA Scientific Instrument Co., Ltd, DDS-307). Malondialdehyde (MDA) content was determined by the thiobarbituric acid method . Leaf samples were ground in a mortar with 5 ml of precooled phosphate buffer at low temperature, and the homogenate was then centrifuged for 20 min. The supernatant, serving as the crude enzyme extract, was stored at 4 °C for the superoxide dismutase (SOD) and peroxidase (POD) activity assays.
SOD activity was measured by the nitroblue tetrazolium method and POD activity by the guaiacol method .
cDNA library construction, sequencing, and data analysis
Transcriptome sequencing was performed by Biomarker Technologies Co., Ltd. (Beijing, China). Each treatment included three biological replicates, resulting in 18 samples. First, RNA was extracted, followed by mRNA purification and fragmentation. cDNA was then synthesized and ligated, and the ligation products were purified. After fragment selection, the library was constructed. Following quality inspection of the libraries, sequencing was performed in PE150 mode on a high-throughput sequencing platform (Illumina NovaSeq 6000, San Diego, USA). After sequencing, data analysis was conducted using the bioinformatics pipeline provided by BMKCloud ( www.biocloud.net ). The raw data were filtered to obtain clean data, which were then aligned to the common bean reference genome ( https://phytozome-next.jgi.doe.gov/info/Pvulgaris_v2_1 ) using HISAT2. The mapped reads were then assembled, and the transcriptome was reconstructed with StringTie for subsequent analysis. Differentially expressed genes (DEGs) were screened using |fold change| ≥ 2 and false discovery rate (FDR) < 0.01 as criteria. The DEGs were compared against the Gene Ontology (GO) database ( http://www.geneontology.org/ ) to annotate their functional terms, and the number of genes associated with each GO term was tabulated . Functional categories significantly enriched in DEGs relative to the genomic background were identified using FDR ≤ 0.05 as the threshold. Next, the Kyoto Encyclopedia of Genes and Genomes (KEGG) database ( http://www.genome.jp/tools/kaas/ ) was used to annotate and classify the DEGs by pathway function .
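The DEG screen described above combines a fold-change cutoff with a multiple-testing-corrected FDR. A minimal sketch of that joint filter, assuming Benjamini-Hochberg adjustment (the pipeline's exact FDR method is not stated) and using invented p-values:

```python
import math

def bh_fdr(pvals):
    """Benjamini-Hochberg adjusted p-values (one common FDR procedure)."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adj = [0.0] * n
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity.
    for rank_from_end, i in enumerate(reversed(order)):
        rank = n - rank_from_end
        prev = min(prev, pvals[i] * n / rank)
        adj[i] = prev
    return adj

def screen_degs(log2fc, pvals, fc_cut=2.0, fdr_cut=0.01):
    """Indices passing |fold change| >= fc_cut and FDR < fdr_cut."""
    fdr = bh_fdr(pvals)
    return [i for i in range(len(log2fc))
            if abs(log2fc[i]) >= math.log2(fc_cut) and fdr[i] < fdr_cut]

# Four invented genes: only the 1st and 3rd pass both criteria.
print(screen_degs([2.5, 0.3, -1.8, 1.2], [1e-5, 0.5, 1e-4, 0.2]))  # [0, 2]
```

Note that the fold-change cutoff is applied on the log2 scale, so |fold change| ≥ 2 corresponds to |log2FC| ≥ 1.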
The enrichment results were analyzed using the hypergeometric test with the clusterProfiler package and visualized with bubble plots and bar charts. Significant enrichment of DEGs in pathways was determined using FDR ≤ 0.05 as the threshold.
Real-time quantitative polymerase chain reaction (RT-qPCR) validation
Briefly, 12 genes were randomly selected from the DEGs to validate the sequencing results by RT-qPCR. Total RNA was extracted with the RNA preparation kit from Tiangen Biochemical Technology Co., Ltd. (Beijing, China), followed by cDNA synthesis with the PrimeScript™ FAST RT reagent kit with gDNA Eraser (TaKaRa, Japan). Gene primers were designed with SnapGene software, and their specificity was confirmed against the NCBI database. The primers were synthesized by Shenggong Biotechnology Co., Ltd. (Shanghai, China); the specific primer sequences are provided in Table . RT-qPCR was performed using 2× SYBR qPCR Mix from Jiangsu Baishimei Biotechnology Co., Ltd (Lianyungang, China) on a Bio-Rad CFX96 instrument (Bio-Rad, USA). The 20 µl reaction consisted of 10 µl 2× SYBR qPCR Mix, 0.5 µl each of forward and reverse primers (10 µM each), 1 µl cDNA, and 8 µl ddH2O. The RT-qPCR followed a two-step amplification method: pre-denaturation at 95 ℃ for 30 s, followed by 39 cycles of denaturation at 95 ℃ for 10 s and annealing at 60 ℃ for 30 s. A melt curve analysis was also performed. Expression data were normalized to Actin-11 as the reference gene , and relative expression was calculated using the 2^−ΔΔCt method .
Metabolomic profiling
The qualitative and quantitative analysis of metabolites was performed based on the self-built GB-PLANT database of Beijing Biomarker Technologies Co., Ltd.
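Before turning to the metabolite workflow, the 2^−ΔΔCt calculation referenced in the RT-qPCR section above can be sketched as follows; the Ct values here are invented for illustration, with Actin-11 as the reference gene:

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^-ddCt relative expression (treated vs. control, ref-gene normalized)."""
    dct_treated = ct_target - ct_ref            # dCt in the treated sample
    dct_control = ct_target_ctrl - ct_ref_ctrl  # dCt in the control sample
    ddct = dct_treated - dct_control
    return 2.0 ** (-ddct)

# A target whose Ct drops by 2 cycles relative to the reference gene
# between control and treatment is ~4-fold upregulated:
fold = relative_expression(22.0, 18.0, 24.0, 18.0)
print(fold)  # 4.0
```

Each cycle of difference corresponds to a factor of two, which is why a ΔΔCt of −2 yields a four-fold change.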
The metabolites in the samples were analyzed by mass spectrometry for both qualitative and quantitative determination. Characteristic ions of each substance were selected by triple-quadrupole screening, and the signal intensity of these characteristic ions was recorded in the detector. After the mass spectrometry data of metabolites from the different samples were obtained, the peak areas of all mass spectrometry peaks were integrated, and the integration was corrected for the same metabolite across samples.
Metabolite extraction: The sample extracts were analyzed using a UPLC-ESI-MS/MS system (UPLC, Waters Acquity I-Class PLUS; MS, Applied Biosystems QTRAP 6500+). The analytical conditions were as follows. UPLC column: Waters HSS-T3 (1.8 μm, 2.1 mm × 100 mm). The mobile phase consisted of solvent A, pure water with 0.1% formic acid and 5 mM ammonium acetate, and solvent B, acetonitrile with 0.1% formic acid. Sample measurements were performed with a gradient program that started at 98% A, 2% B, held for 1.5 min. A linear gradient to 50% A, 50% B was programmed by 5.0 min, followed by a linear gradient to 2% A, 98% B by 9.0 min, which was held for 1 min. The composition was then returned to 98% A, 2% B within 1 min and held for 3 min. The flow rate was set to 0.35 mL per minute, the column oven was set to 50 °C, and the injection volume was 2 µL. The effluent was alternately connected to an ESI-triple quadrupole-linear ion trap (QTRAP)-MS.
Data analysis: After normalization of the original peak area information to the total peak area, subsequent analyses were performed.
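The gradient program above can be represented as (time, %B) breakpoints with linear interpolation between them. A sketch, assuming the run ends at 14.0 min (1.5 min hold + ramps + the stated 1 min and 3 min holds); the breakpoints are transcribed from the text:

```python
# (time_min, percent_B) breakpoints; hold segments are explicit pairs.
GRADIENT = [(0.0, 2), (1.5, 2), (5.0, 50), (9.0, 98),
            (10.0, 98), (11.0, 2), (14.0, 2)]

def percent_b(t):
    """Linearly interpolate %B (acetonitrile phase) at time t in minutes."""
    for (t0, b0), (t1, b1) in zip(GRADIENT, GRADIENT[1:]):
        if t0 <= t <= t1:
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
    raise ValueError("time outside gradient program")

print(percent_b(1.5))   # 2.0  (end of initial hold)
print(percent_b(3.25))  # 26.0 (halfway up the first ramp)
print(percent_b(9.5))   # 98.0 (during the high-organic hold)
```

This kind of breakpoint table is a convenient way to sanity-check that ramps and holds add up to the intended run time.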
Principal component analysis (PCA) and Spearman correlation analysis were used to assess the repeatability of the samples within groups and of the quality control samples. The identified compounds were classified, and their pathway information was determined using the KEGG, HMDB, and LIPID MAPS databases. Based on the grouping information, fold changes were calculated and compared, and the significance of each compound was assessed with a t-test to obtain the p-value. OPLS-DA modeling was performed with the R package "ropls", and the reliability of the model was verified by 200 permutation tests. The VIP value of the model was calculated using multiple cross-validation. Differential metabolites (DEMs) were screened by combining the fold changes, p-values, and VIP values of the OPLS-DA model, with FC > 1, p-value < 0.05, and VIP > 1 as the screening criteria. The significance of KEGG pathway enrichment for the differential metabolites was calculated using the hypergeometric distribution test.
Integrative analysis of transcriptome and metabolome
DEG-DEM pairs with a Pearson correlation coefficient |r| > 0.8 were selected to establish a correlation network, which was visualized with Cytoscape (v.3.10.2) software .
Statistical analyses
Physiological data were organized in Excel 2019. One-way analysis of variance (ANOVA) was performed in SPSS 27.0, with multiple comparisons by Duncan's method. The significance threshold was set at P < 0.05. Figures were generated with Origin 2021. Data are presented as the mean ± standard error (SE) of three biological replicates.
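The DEM screen described above applies the fold-change, p-value, and VIP criteria jointly. A minimal sketch of that triple filter (all values invented; the actual screen was run on the ropls OPLS-DA output):

```python
def screen_dems(fold_changes, pvals, vips,
                fc_cut=1.0, p_cut=0.05, vip_cut=1.0):
    """Indices of metabolites with FC > 1, p < 0.05, and VIP > 1."""
    return [i for i, (fc, p, vip) in enumerate(zip(fold_changes, pvals, vips))
            if fc > fc_cut and p < p_cut and vip > vip_cut]

# Four invented metabolites: only the first passes all three criteria
# (the others fail on FC, p-value, and VIP respectively).
hits = screen_dems([2.4, 0.6, 1.8, 1.1],
                   [0.01, 0.02, 0.20, 0.03],
                   [1.6, 2.0, 1.2, 0.8])
print(hits)  # [0]
```

Requiring all three criteria at once is what keeps the DEM list small relative to the 923 detected metabolites.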
Cold stress significantly affected the phenotype and physiological indicators of two common bean varieties
The phenotypes and associated physiological indices of the cold-sensitive variety 'Bai Bu Lao' (BBL) and the cold-tolerant variety 'Wei Yuan' (WY) were analyzed following exposure to 5 °C for 0, 6, and 24 h. As shown in Fig. A, under normal temperature conditions (0 h), the two bean varieties showed no significant phenotypic differences. After 6 h at 5 °C, all leaves of the BBL beans wilted and drooped, whereas only some leaves of the WY beans did; the extent of cold damage was significantly greater in BBL than in WY beans. After 24 h at 5 °C, the leaves of the BBL beans continued to droop and gradually became wrinkled, showing clear signs of dehydration. In contrast, the leaves of the WY beans ceased drooping and began to flatten, gradually resuming growth. Furthermore, the physiological indices of BBL and WY beans under low-temperature stress were measured. After 24 h at 5 °C, compared with the 0 h time point, the relative conductivity of BBL and WY beans increased by 79.08% and 108.34% ( P < 0.05) (Fig. B), the malondialdehyde content increased by 51.54% and 85.76% (Fig. C), and the SOD activity increased by 16.72% and 33.86% ( P < 0.05), respectively (Fig. D). Compared with the 0 h time point, exposure to 5 °C for 6 h and 24 h significantly increased the relative conductivity, malondialdehyde content, SOD activity, and POD activity in both BBL and WY beans (Fig. C, D, E). Notably, the increases in these physiological indices were larger in WY beans than in BBL beans. Collectively, these results indicate a difference in cold tolerance between BBL and WY beans, with WY demonstrating greater cold tolerance than BBL.
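The percent increases reported above follow the usual (value − baseline) / baseline definition relative to the 0 h time point. A quick sketch with invented baseline and 24 h values (not the measured data):

```python
def pct_increase(baseline, value):
    """Percent change of `value` relative to `baseline`."""
    return (value - baseline) / baseline * 100.0

# Invented relative-conductivity readings chosen to reproduce a 79.0% rise:
print(round(pct_increase(20.0, 35.8), 1))  # 79.0
```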
RNA sequencing, assembly, and real-time quantitative polymerase chain reaction (RT-qPCR) validation
Transcriptome sequencing of BBL and WY beans under low-temperature treatment for 0 h, 6 h, and 24 h was performed on a high-throughput sequencing platform. Each sample was analyzed in triplicate at each time point, resulting in 18 samples. As shown in Table , sequencing quality control yielded a total of 114.91 Gb of clean data, with a Q30 base percentage of ≥ 93.40% for each sample and read alignment rates to the reference genome between 91.43% and 96.44%. This demonstrates that the output and quality of the sequencing data meet the requirements for further analysis and are suitable for subsequent bioinformatics analysis. At the same time, 12 DEGs were randomly selected for RT-qPCR analysis across the six treatments (BBL0, BBL6, BBL24, WY0, WY6, and WY24) to verify the reliability of the transcriptome data. Although some differences in gene expression were observed, as shown in Figure , the RT-qPCR results for 11 DEGs were consistent with the transcriptome expression trends, indicating that the transcriptome data from this study are reliable and suitable for further analysis.
Overview of metabolic profiles in common beans under cold stress
To investigate the effect of cold stress on the metabolic profiles of the different common bean varieties, we conducted qualitative and quantitative analyses of metabolites in the 18 samples using high-throughput, broadly targeted detection technology. A total of 923 metabolites were detected, including carboxylic acids and derivatives (12.2%), organooxygen compounds (8.5%), flavonoids (4.4%), prenol lipids (4.4%), fatty acyls (4.3%), benzene and substituted derivatives (2.3%), purine nucleosides (1.5%), steroids and steroid derivatives (1.5%), phenols (1.4%), organonitrogen compounds (1.3%), and others (58.1%) (Fig. A).
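The class percentages above can be converted back into approximate metabolite counts out of the 923 detected, which is a useful sanity check on such breakdowns. A minimal sketch using shares transcribed from the text (counts are approximate because the shares are rounded to one decimal):

```python
TOTAL_METABOLITES = 923

def approx_count(share_pct, total=TOTAL_METABOLITES):
    """Approximate class size implied by a rounded percentage share."""
    return round(share_pct / 100 * total)

print(approx_count(12.2))  # 113 carboxylic acids and derivatives
print(approx_count(8.5))   # 78  organooxygen compounds
print(approx_count(58.1))  # 536 in the "others" category
```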
In the clustering heatmap analysis, BBL and WY beans, along with their respective biological replicates for each treatment, clustered together, indicating high repeatability and correlation within the data (Fig. B). The heatmap further revealed that certain metabolites accumulated specifically in BBL beans, while others accumulated exclusively in WY beans. In addition, the disparities in metabolite accumulation patterns between the two genotypes under cold stress suggest that variation in cold tolerance within the common bean species may stem from the accumulation of DEMs. Moreover, PCA was conducted on the dataset of 923 metabolites. The PCA score plot indicates that the first principal component (PC1) and the second principal component (PC2) account for 24.3% and 22.1% of the total variance, respectively (Fig. C), and the cumulative variance explained by PC1 to PC5 is 80% (Fig. D). The close clustering of replicate and mixed quality control samples supports the reproducibility and reliability of this experiment. Additionally, a Variable Importance in Projection (VIP) analysis based on the PLS-DA model was performed to identify the most informative metabolites differentiating the two contrasting common bean varieties. The PLS-DA analysis effectively separated the metabolites of the two bean varieties and their three treatment periods along the t1 and t2 axes (Figure A). The permutation test of the PLS-DA model also demonstrated good independence between the training and testing datasets (Figure B). Analysis of the VIP values for all metabolites revealed that 428 metabolites had VIP values greater than 1 (Table ). The top twenty metabolites by VIP value are listed in the figure (Figure C).
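Explained-variance shares like the 24.3% / 22.1% reported for PC1 and PC2 above are each component's squared singular value divided by the total. A minimal sketch with invented singular values (the real shares come from the PCA of the 923-metabolite matrix):

```python
def explained_variance_ratio(singular_values):
    """Per-component share of total variance from SVD singular values."""
    squared = [s * s for s in singular_values]
    total = sum(squared)
    return [v / total for v in squared]

ratios = explained_variance_ratio([3.0, 2.0, 1.0])
print([round(r, 3) for r in ratios])  # [0.643, 0.286, 0.071]
```

Summing the leading ratios gives the cumulative variance figure quoted for PC1 through PC5.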
Joint analysis of DEGs and DEMs in common beans under cold stress
This experiment established seven comparison groups, including intra-group comparisons within BBL and WY and inter-group comparisons between BBL and WY. Initially, the study examined the influence of cold stress on the different common bean varieties. In the cold-sensitive common bean BBL, compared with the 0 h time point, 1923 upregulated and 1937 downregulated DEGs were identified after 6 h of low-temperature treatment (Fig. A). In addition, there were 177 upregulated and 192 downregulated DEMs in BBL (Fig. B). The KEGG joint enrichment analysis revealed significant enrichment of DEGs in pathways such as Isoflavonoid biosynthesis, Flavonoid biosynthesis, beta-Alanine metabolism, Valine, leucine and isoleucine degradation, and Phenylpropanoid biosynthesis. Additionally, DEMs were significantly enriched in the Lysine degradation and Starch and sucrose metabolism pathways (Fig. C, Figure ). The GO enrichment analysis of DEGs identified annotations of 19 biological processes, three cellular components, and 12 molecular functions (Fig. G). After 24 h of low-temperature treatment, there were 3487 upregulated and 3489 downregulated DEGs in BBL (Fig. A), as well as 252 upregulated and 231 downregulated DEMs (Fig. B). The KEGG joint enrichment analysis revealed significant enrichment of DEGs in pathways such as Plant hormone signal transduction, Flavonoid biosynthesis, Isoflavonoid biosynthesis, Galactose metabolism, Benzoxazinoid biosynthesis, Zeatin biosynthesis, and Phenylpropanoid biosynthesis. DEMs, on the other hand, were significantly enriched in Cyanoamino acid metabolism and Valine, leucine, and isoleucine degradation (Fig. D, Figure ). The GO enrichment analysis for DEGs identified annotations for 20 biological processes, three cellular components, and 13 molecular functions (Figure A).
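The enrichment significance here and in the Methods is based on the hypergeometric test: given N annotated genes of which K belong to a pathway, and n DEGs of which k fall in that pathway, the p-value is P(X ≥ k). A stdlib-only sketch with invented counts:

```python
from math import comb

def hypergeom_enrichment_p(N, K, n, k):
    """Upper-tail hypergeometric p-value for pathway enrichment.

    N: annotated background genes, K: genes in the pathway,
    n: DEGs, k: DEGs that are in the pathway.
    """
    denom = comb(N, n)
    return sum(comb(K, x) * comb(N - K, n - x)
               for x in range(k, min(K, n) + 1)) / denom

# Toy example: 3 of 5 DEGs hit a 5-gene pathway in a 20-gene background.
p = hypergeom_enrichment_p(N=20, K=5, n=5, k=3)
print(round(p, 4))  # 0.0726
```

In practice the per-pathway p-values are then FDR-adjusted, as described in the Methods.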
In the cold-tolerant common bean WY, compared with the 0 h time point, 1290 upregulated and 1359 downregulated DEGs were identified after 6 h of low-temperature treatment (Fig. A). Additionally, there were 180 upregulated and 244 downregulated DEMs in WY (Fig. B). The KEGG joint enrichment analysis revealed significant enrichment of DEGs in pathways such as Flavonoid biosynthesis, Plant hormone signal transduction, Stilbenoid, diarylheptanoid and gingerol biosynthesis, Flavone and flavonol biosynthesis, and Phenylpropanoid biosynthesis. DEMs were significantly enriched in Glyoxylate and dicarboxylate metabolism (Fig. E, Figure ). The GO enrichment analysis of DEGs identified annotations for 20 biological processes, three cellular components, and 12 molecular functions (Figure A). After 24 h of low-temperature treatment, there were 3762 upregulated and 3566 downregulated DEGs in WY (Fig. A). In addition, there were 182 upregulated and 256 downregulated DEMs in WY (Fig. B). The KEGG joint enrichment analysis revealed significant enrichment of DEGs in pathways such as Isoflavone biosynthesis, Isoflavonoid biosynthesis, Plant hormone signal transduction, Flavonoid biosynthesis, Galactose metabolism, Flavone and flavonol biosynthesis, Carotenoid biosynthesis, Valine, leucine and isoleucine degradation, and Glutathione metabolism (Fig. F, Figure ). Furthermore, the GO enrichment analysis of DEGs identified annotations for 20 biological processes, three cellular components, and 13 molecular functions (Figure B). Integrating the enriched metabolic pathways with the differential genes and metabolites after low-temperature treatment in BBL and WY, most pathways are associated with sugar and acid metabolism, amino acid metabolism, flavonoid biosynthesis, and related metabolic pathways. Moreover, pathways such as plant hormone signal transduction, photosynthesis, and starch and sucrose metabolism were also identified.
Notably, no significant difference was observed in GO enrichment between BBL and WY common beans after low-temperature treatment.
Joint analysis of DEGs and DEMs in cold-sensitive BBL and cold-tolerant WY under cold stress
Furthermore, we compared the effects of cold stress on the two common bean varieties with differing cold tolerance. At 0 h, the cold-tolerant variety WY had 937 upregulated and 1170 downregulated DEGs compared with the cold-sensitive variety BBL (Fig. A), along with 229 upregulated and 196 downregulated DEMs (Fig. B). KEGG pathway enrichment analysis revealed that DEGs were significantly enriched in pathways related to galactose metabolism; ascorbate and aldarate metabolism; valine, leucine, and isoleucine degradation; plant hormone signal transduction; isoquinoline alkaloid biosynthesis; and phenylalanine metabolism (Figure A, B). DEMs were significantly enriched in pathways such as fatty acid degradation, isoflavonoid biosynthesis, and cyanoamino acid metabolism (Figure A, B). After 6 h of low-temperature treatment, there were 1396 upregulated and 1384 downregulated DEGs in BBL compared with WY (Fig. A). Additionally, BBL had 249 upregulated and 240 downregulated DEMs compared with WY (Fig. B). The KEGG pathway enrichment analysis showed that DEGs were significantly enriched in pathways related to flavonoid biosynthesis, isoflavonoid biosynthesis, phenylpropanoid biosynthesis, amino sugar and nucleotide sugar metabolism, ascorbate and aldarate metabolism, galactose metabolism, and starch and sucrose metabolism. DEMs were significantly enriched in pathways such as isoflavonoid biosynthesis, isoquinoline alkaloid biosynthesis, and alpha-linolenic acid metabolism (Fig. A, Figure C). After 24 h of low-temperature treatment, there were 995 upregulated and 1163 downregulated DEGs in BBL compared with WY (Fig. A), along with 194 upregulated and 282 downregulated DEMs (Fig. B).
The KEGG pathway enrichment analysis revealed significant enrichment of DEGs in pathways such as galactose metabolism, ascorbate and aldarate metabolism, amino sugar and nucleotide sugar metabolism, alpha-linolenic acid metabolism, isoflavonoid biosynthesis, plant hormone signal transduction, phenylpropanoid biosynthesis, flavonoid biosynthesis, and pentose and glucuronate interconversions (Fig. B, Figure D). DEMs were significantly enriched in pathways such as ascorbate and aldarate metabolism, isoflavonoid biosynthesis, and valine, leucine, and isoleucine degradation (Fig. B, Figure D). These findings indicate that at 6 h of low-temperature treatment, the cold-sensitive BBL and cold-tolerant WY common bean varieties exhibited the highest numbers of DEGs and DEMs, which were significantly enriched in pathways associated with flavonoid biosynthesis, sugar and acid metabolism, amino acid metabolism, and plant hormone signal transduction. Therefore, in subsequent experiments, our focus was directed toward analyzing the response of common beans to low-temperature stress, specifically examining sugars and amino acids, flavonoid secondary metabolites, plant hormones, and the associated genes. The DEG petal diagram results show that, following low-temperature treatment, one upregulated DEG, Phvul.007G135400 (Fig. C), was shared across all comparison groups of the cold-sensitive BBL and cold-tolerant WY common beans, along with 25 shared downregulated DEGs (Fig. D). Moreover, the DEM petal diagram results reveal that, following low-temperature treatment, three upregulated DEMs (Fig. D) were shared among all comparison groups of BBL and WY common beans, namely L-Tryptophan, Vasicinol, and Shizukaol D. Notably, there were also eight shared downregulated DEMs (Fig. E).
Analysis of transcription factors in DEGs
Transcription factors play a crucial regulatory role in the response mechanism to cold stress.
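The "shared across all comparison groups" DEGs and DEMs identified in the petal diagrams above are simply set intersections over the per-comparison lists. A sketch with invented group contents: only Phvul.007G135400 is taken from the text; the group labels and other IDs are hypothetical placeholders:

```python
# Hypothetical per-comparison upregulated DEG sets (IDs other than
# Phvul.007G135400 are invented for illustration).
groups = {
    "BBL6_vs_BBL0":  {"Phvul.007G135400", "geneA", "geneB"},
    "BBL24_vs_BBL0": {"Phvul.007G135400", "geneB", "geneC"},
    "WY6_vs_WY0":    {"Phvul.007G135400", "geneD"},
    "WY24_vs_WY0":   {"Phvul.007G135400", "geneA", "geneD"},
}

# The "core" shared set is the intersection across all comparisons.
shared = set.intersection(*groups.values())
print(shared)  # {'Phvul.007G135400'}
```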
To elucidate this regulatory network, transcription factor prediction was performed on all 11,212 DEGs identified from the transcriptome sequencing. The prediction showed that the six transcription factor families with the most annotations were RLK-Pelle-DLSV (93), AP2/ERF-ERF (76), bHLH (71), MYB (64), WRKY (56), and NAC (54) (Fig. A). Many of these differentially expressed transcription factors, including members of the bHLH and MYB families, have previously been reported to regulate plant cold tolerance . The bHLH transcription factors play multifaceted roles, regulating physiological and biochemical processes such as signal transduction, and are actively involved in modulating various stress responses, including those to drought and cold. This study identified 71 bHLH family transcription factors among the DEGs (Fig. B). Among them, cold stress for 24 h significantly upregulated the expression of six bHLH transcription factors (Phvul.003G181900.v2.1, Phvul.002G018300.v2.1, Phvul.003G140800.v2.1, Phvul.002G283100.v2.1, Phvul.001G121200.v2.1, and Phvul.003G157100.v2.1) in cold-sensitive BBL beans, whereas cold stress for 6 h significantly upregulated the expression of six bHLH transcription factors (Phvul.009G137400.v2.1, Phvul.010G120000.v2.1, Phvul.002G017600.v2.1, Phvul.006G196600.v2.1, Phvul.001G085500.v2.1, and Phvul.002G088400.v2.1) in cold-tolerant WY beans. Meanwhile, low-temperature treatment for 24 h significantly downregulated the expression of five bHLH transcription factors (Phvul.003G067500.v2, Phvul.006G028500.v2.1, Phvul.006G184600.v2.1, Phvul.001G031300.v2.1, and Phvul.009G023500.v2.1) in both cold-sensitive BBL and cold-tolerant WY beans. In addition, Phvul.001G023200.v2.1 and Phvul.003G231200.v2.1 exhibited contrasting trends between cold-sensitive BBL and cold-tolerant WY beans under cold stress (Fig. B).
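The family ranking above amounts to tallying the predicted family annotation of each differentially expressed transcription factor and sorting by count. A small sketch with a hypothetical gene-to-family mapping:

```python
from collections import Counter

# Tally predicted transcription-factor family annotations among DEGs
# and rank families by count, as in the family ranking above.
# The gene -> family mapping here is hypothetical.
annotations = {
    "Phvul.003G181900": "bHLH",
    "Phvul.002G018300": "bHLH",
    "Phvul.010G120000": "bHLH",
    "Phvul.002G170500": "MYB",
    "Phvul.008G262900": "MYB",
    "Phvul.001G031300": "WRKY",
}
family_counts = Counter(annotations.values())
top_families = family_counts.most_common(2)  # most frequent families first
```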
The MYB transcription factor family is one of the largest in plants, and numerous studies support its potential for plant breeding and improvement. In this study, 64 MYB family transcription factors were identified among the DEGs (Fig. C). Among them, low-temperature treatment for 24 h significantly downregulated the expression of several MYB transcription factors (Phvul.002G170500.v2.1, Phvul.010G009900.v2.1, Phvul.003G176800.v2.1, Phvul.008G262900.v2.1, and Phvul.009G187700.v2.1) in both cold-sensitive BBL and cold-tolerant WY beans, while significantly upregulating the expression of 25 MYB transcription factors in both varieties. At the same time, the trends of Phvul.007G108500.v2.1, Phvul.008G041500.v2.1, and Phvul.007G215800.v2.1 in BBL and WY beans after 24 h of low-temperature treatment were inconsistent.

The effect of cold stress on sugar and amino acids in common beans

Previous research has demonstrated that low-temperature stress significantly impacts sugar metabolism as well as amino acid synthesis and metabolic pathways in plants . Therefore, we integrated the DEGs and DEMs to construct pathway diagrams for sugar and amino acid metabolism. As shown in Fig. , low-temperature stress significantly increased the levels of galactinol, sucrose, melibiose, trehalose, and maltose in both cold-sensitive BBL and cold-tolerant WY common beans, while it decreased the level of fructose-6P. Additionally, the expression patterns of raffinose, manninotriose, D-galactose, D-fructose, glucose, and glucose-6P differed between cold-sensitive BBL and cold-tolerant WY common beans (Fig. A).
Concurrently, the expression levels of genes encoding myo-inositol galactosyltransferase (Phvul.001G223700.v2.1, Phvul.007G203400.v2.1), raffinose synthase (Phvul.004G007100.v2.1), hexokinase (Phvul.010G144900.v2.1), fructokinase (Phvul.011G091600.v2.1), pyruvate kinase (Phvul.004G000800.v2.1), phosphofructokinase/phosphatase (Phvul.003G150400.v2.1, Phvul.008G172400.v2.1, Phvul.009G054400.v2.1), α-amylase (Phvul.006G185000.v2.1), and β-amylase (Phvul.003G226900.v2.1) also showed a marked increase (Fig. B). In the citric acid cycle, low-temperature stress significantly reduced the levels of 2-ketoglutaric acid in both cold-sensitive BBL and cold-tolerant WY common beans and influenced the aconitic acid content: the aconitic acid content decreased significantly with prolonged exposure to low temperature in BBL beans, whereas in WY beans it initially decreased and then increased (Fig. A). Additionally, the expression levels of genes encoding isocitrate dehydrogenase (Phvul.002G114600.v2.1, Phvul.007G150400.v2.1) increased significantly with prolonged low-temperature treatment (Fig. B). Under low-temperature stress, the levels of certain amino acids and their downstream metabolites, including proline, L-leucine, L-valine, tyrosine, D-aspartic acid, and L-aspartate, significantly increased in both cold-sensitive BBL and cold-tolerant WY common beans. The levels of L-lysine, glutamine, and citrulline in both varieties initially increased and then decreased with prolonged exposure to low temperature. Furthermore, the levels of glutamate, GABA, citrulline, arginine, L-isoleucine, tyramine, and L-asparagine exhibited divergent trends between cold-sensitive BBL and cold-tolerant WY common beans (Fig. A).
Meanwhile, the expression levels of genes encoding 2-oxoglutarate dehydrogenase (Phvul.010G017700.v2.1, Phvul.009G156400.v2.1), isobutyryl-CoA dehydrogenase (Phvul.011G073500.v2.1), and 3-methylcrotonyl-CoA carboxylase (Phvul.003G291600.v2.1) significantly decreased with prolonged low-temperature treatment in both cold-sensitive BBL and cold-tolerant WY common beans. The expression levels of genes encoding a tyrosine metabolism enzyme (Phvul.006G143500.v2.1) and aspartate aminotransferase (Phvul.001G250600.v2.1) increased with prolonged low-temperature treatment in both varieties (Fig. B).

The effect of cold stress on secondary metabolites of flavonoids in common beans

Flavonoids constitute a crucial category of secondary metabolites in plants, exerting significant biological functions. Herein, we identified 19 DEMs and 15 DEGs associated with the biosynthesis and metabolism of flavonoids in common beans. As shown in Fig. , cold stress significantly increased the levels of L-tyrosine, L-phenylalanine, naringenin chalcone, naringenin, genistein, apigenin, apigenin 7-glucoside, and pratensein, while significantly diminishing the content of coumestrol and genistein 7,4′-di-O-β-D-glucopyranoside; these substances exhibited consistent trends in both BBL and WY beans. The levels of eriodictyol, quercetin, isoquercitrin, isosakuranetin, isoliquiritigenin, 7,4′-dihydroxyflavone, trans-5-O-(p-coumaroyl)shikimate, luteolin, and chrysoeriol exhibited distinct trends between BBL and WY beans (Fig. A). At the same time, cold stress significantly upregulated the expression of genes encoding shikimate O-hydroxycinnamoyl transferase (the genes numbered 4 and marked in red in Fig. B), chalcone synthase (the genes numbered 5 and marked in red in Fig. B), and isoflavone 7-O-glucoside-6″-O-malonyltransferase (Phvul.008G029400.v2.1, Phvul.008G032200.v2.1), with a consistent trend of change in BBL and WY beans. The genes encoding phenylalanine ammonia-lyase (Phvul.001G177700.v2.1, Phvul.001G177800.v2.1) displayed divergent trends between BBL and WY beans: exposure to cold stress for 6 h significantly upregulated the expression of the PAL gene in the cold-sensitive bean BBL. In addition, cold stress for 24 h significantly upregulated the expression of genes encoding 5-O-(4-coumaroyl)-D-quinate 3′-monooxygenase and chalcone synthase (the genes numbered 6–7 and marked in red in Fig. B), 2-hydroxyisoflavone synthase (Phvul.003G074000.v2.1, Phvul.003G051800.v2.1, Phvul.003G051801.v2.1, Phvul.003G051700.v2.1), and flavonoid 3′-monooxygenase (Phvul.L001623.v2.1) in WY beans (Fig. B). These findings suggest that common beans may alleviate the effects of cold stress by modulating flavonoid levels and the expression of flavonoid biosynthesis genes.

The effect of cold stress on hormone synthesis and transduction in common beans

Plant hormones play an important role in mediating the response to cold stress. In this study, we identified three DEMs and 18 DEGs implicated in the biosynthesis pathways of brassinolide (BR), abscisic acid (ABA), jasmonic acid (JA), and salicylic acid (SA) (Fig. ). In the BR pathway, the expression levels of Phvul.001G075500.v2.1, Phvul.002G318200.v2.1, Phvul.003G187200.v2.1, Phvul.003G247400.v2.1, Phvul.006G033300.v2.1, Phvul.002G047200.v2.1, and Phvul.003G143332.v2.1 significantly increased with the duration of cold stress, whereas those of Phvul.008G186900.v2.1 and Phvul.010G056200.v2.1 significantly decreased. In addition, the expression levels of Phvul.003G164800.v2.1, Phvul.003G247601.v2.1, Phvul.003G247651.v2.1, and Phvul.005G074000.v2.1 exhibited distinct trends in BBL and WY common beans (Fig. B).
In the ABA pathway, metabolomic analysis revealed that cold stress for 24 h increased the ABA content in the leaves of both BBL and WY bean varieties (Fig. A). In the ABA signal transduction process, we identified five DEGs encoding PYR/PYL, 10 DEGs encoding PP2C, four DEGs encoding SnRK2, and five DEGs encoding ABF. Among these, the expression level of Phvul.002G141901.v2.1 decreased with prolonged cold stress, while the expression levels of most other DEGs increased. Furthermore, Phvul.001G246300.v2.1 exhibited differential changes only in cold-sensitive BBL beans, with no significant change observed in cold-tolerant WY beans (Fig. C). In the JA pathway, JA is biosynthesized through the linolenic acid metabolic pathway, proceeding via the intermediates 13-HPOT and OPDA. The metabolomic results indicate that the content of α-linolenic acid significantly decreased only in the cold-sensitive bean BBL, while its variation was minor in the cold-tolerant bean WY (Fig. A). In the JA signal transduction process, two DEGs encoding JAR1, one DEG encoding COI1, four DEGs encoding JAZ, and 11 DEGs encoding MYC2 were identified. Among them, the expression levels of Phvul.003G231000.v2.1, Phvul.009G225300.v2.1, and Phvul.006G198400.v2.1 decreased with prolonged low-temperature treatment (Fig. C). In the SA pathway, SA is biosynthesized via the phenylalanine metabolism pathway, catalyzed by phenylalanine ammonia-lyase (PAL). The metabolomics results revealed a significant increase in the content of L-phenylalanine with prolonged cold stress, and this trend was similar in both cold-sensitive BBL and cold-tolerant WY beans.
The content of SA in BBL beans increased with prolonged low-temperature treatment, whereas in WY common beans it initially increased and then decreased (Fig. A). In the SA signal transduction process, two DEGs encoding NPR1, two DEGs encoding TGA, and three DEGs encoding PR-1 were identified. Among them, the expression levels of the DEGs encoding NPR1 increased with prolonged low-temperature treatment, while those of the DEGs encoding TGA decreased. In addition, the expression levels of the PR-1-encoding DEGs Phvul.003G148550.v2.1 and Phvul.006G197500.v2.1 increased with prolonged low-temperature treatment, while that of Phvul.006G196900.v2.1 decreased (Fig. C).
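Many statements in the expression results above classify a gene or metabolite by its trajectory across the 0 h, 6 h, and 24 h time points ("increased with prolonged treatment", "initially increased and then decreased"). A simple sketch of such a three-point trend labeling, with hypothetical abundance values:

```python
# Label a trajectory across the 0 h, 6 h, and 24 h time points,
# mirroring the verbal categories used in the text. Input values are
# hypothetical normalized abundances.
def trend(v0, v6, v24):
    if v0 < v6 < v24:
        return "increasing"        # rises with prolonged treatment
    if v0 > v6 > v24:
        return "decreasing"        # falls with prolonged treatment
    if v6 > v0 and v6 > v24:
        return "up-then-down"      # initial increase, then decrease
    if v6 < v0 and v6 < v24:
        return "down-then-up"      # initial decrease, then increase
    return "mixed"                 # flat or ambiguous
```

Comparing the labels between the two varieties flags the "divergent trend" cases.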
The phenotypes and associated physiological indices of the cold-sensitive variety ‘Bai Bu Lao’ (BBL) and the cold-tolerant variety ‘Wei Yuan’ (WY) were analyzed following exposure to 5 °C for 0, 6, and 24 h. As shown in Fig. A, under normal temperature conditions (0 h), the two bean varieties showed no significant phenotypic differences. After 6 h of exposure to 5 °C, all leaves of the BBL beans wilted and drooped, whereas only some leaves of the WY beans exhibited wilting and drooping; the extent of cold damage was significantly greater in BBL beans than in WY beans. After 24 h of exposure to 5 °C, the leaves of the BBL beans continued to droop and gradually became wrinkled, showing clear signs of dehydration. In contrast, the leaves of the WY beans ceased drooping and began to flatten, gradually resuming growth. Furthermore, the physiological indices of BBL and WY beans under low-temperature stress were measured. After 24 h of exposure to 5 °C, compared with the 0 h time point, the relative conductivity of BBL and WY beans increased by 79.08% and 108.34% ( P < 0.05) (Fig. B), the malondialdehyde content increased by 51.54% and 85.76% (Fig. C), and the SOD activity increased by 16.72% and 33.86% ( P < 0.05), respectively (Fig. D). Compared with the 0 h time point, exposure to 5 °C for 6 h and 24 h significantly increased the relative conductivity, malondialdehyde content, SOD activity, and POD activity in both BBL and WY beans (Fig. C, D, E). Notably, the increases in these physiological indices were greater in WY beans than in BBL beans. Collectively, these results indicate a difference in cold tolerance between the two varieties, with WY beans demonstrating greater cold tolerance than BBL beans.
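The percentage rises reported above are relative increases over the 0 h baseline. A sketch of the arithmetic; the conductivity readings used below are hypothetical, since the raw instrument values are not given in the text:

```python
# Relative (percent) increase of a physiological index over the 0 h
# baseline, as used for the conductivity/MDA/SOD comparisons above.
def pct_increase(baseline, treated):
    return (treated - baseline) / baseline * 100.0

# Hypothetical relative-conductivity readings (%) at 0 h and 24 h,
# chosen so the relative rise works out to 79.08%
rise = pct_increase(20.0, 35.816)
```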
Transcriptome sequencing of BBL and WY beans under low-temperature treatment for 0 h, 6 h, and 24 h was performed on a high-throughput sequencing platform. Each sample was analyzed in triplicate at each time point, yielding 18 samples in total. As shown in Table , quality control of the sequencing output yielded a total of 114.91 Gb of clean data, with a Q30 base percentage of ≥ 93.40% for each sample and a read mapping rate to the reference genome between 91.43% and 96.44%. This demonstrates that the quantity and quality of the sequencing data meet the requirements for subsequent bioinformatics analysis. At the same time, 12 DEGs were randomly selected for RT-qPCR analysis across the six treatments (BBL0, BBL6, BBL24, WY0, WY6, and WY24) to verify the reliability of the transcriptome data. Although some differences in gene expression were observed, as shown in Figure , the RT-qPCR results for 11 of the 12 DEGs were consistent with the transcriptome expression trends, indicating that the transcriptome data from this study are reliable and suitable for further analysis.
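RT-qPCR validation of this kind is typically quantified with the 2^(−ΔΔCt) (Livak) method, although the text does not state which quantification was used here; the sketch below is therefore illustrative, with hypothetical Ct values and an assumed reference gene:

```python
# Relative expression by the 2^(-delta-delta-Ct) method (Livak &
# Schmittgen), the standard way to compare RT-qPCR fold changes with
# RNA-seq trends. All Ct values below are hypothetical.
def fold_change_ddct(ct_target_trt, ct_ref_trt, ct_target_ctl, ct_ref_ctl):
    d_ct_trt = ct_target_trt - ct_ref_trt  # normalize to reference gene
    d_ct_ctl = ct_target_ctl - ct_ref_ctl
    dd_ct = d_ct_trt - d_ct_ctl
    return 2.0 ** (-dd_ct)

# Target gene at 24 h cold vs. 0 h control, normalized to a reference gene
fc = fold_change_ddct(20.0, 18.0, 24.0, 18.0)
```

A fold change > 1 (here, induction under cold) should agree in direction with the RNA-seq log2 fold change for the same gene.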
To investigate the effect of cold stress on the metabolic profiles of the different common bean varieties, we conducted qualitative and quantitative analyses of metabolites in the 18 samples using high-throughput, widely targeted detection technology. A total of 923 metabolites were detected, including carboxylic acids and derivatives (12.2%), organooxygen compounds (8.5%), flavonoids (4.4%), prenol lipids (4.4%), fatty acyls (4.3%), benzene and substituted derivatives (2.3%), purine nucleosides (1.5%), steroids and steroid derivatives (1.5%), phenols (1.4%), organonitrogen compounds (1.3%), and others (58.1%) (Fig. A). In the clustering heatmap analysis, BBL and WY beans, along with their respective biological replicates for each treatment, clustered together, indicating high repeatability and correlation within the data (Fig. B). The heatmap further revealed that certain metabolites accumulated specifically in BBL beans, while others accumulated exclusively in WY beans. In addition, the disparities in metabolite accumulation patterns between the two genotypes under cold stress suggest that variation in cold tolerance within the common bean species may stem from the accumulation of DEMs. Moreover, PCA was conducted on the dataset of 923 metabolites. The PCA score plot indicates that the first principal component (PC1) and the second principal component (PC2) account for 24.3% and 22.1% of the total variance, respectively (Fig. C), and the cumulative variance explained by PC1 to PC5 is 80% (Fig. D). The close clustering of replicate and mixed (pooled) samples supports the reproducibility and reliability of the experiment. Additionally, a Variable Importance in Projection (VIP) analysis based on the PLS-DA model was performed to identify the metabolites that best differentiate the two contrasting common bean varieties.
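The PC1/PC2 variance percentages quoted above come from a principal component analysis of the centered sample-by-metabolite matrix. A NumPy-only sketch on a random stand-in matrix, since the real 18 × 923 abundance table is not reproduced here:

```python
import numpy as np

# PCA via SVD on a mean-centered sample-by-metabolite matrix; the
# explained-variance ratios correspond to the per-PC percentages
# reported above. Data are random stand-ins (18 samples x 60 features).
rng = np.random.default_rng(42)
X = rng.normal(size=(18, 60))
Xc = X - X.mean(axis=0)              # center each metabolite
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)      # variance ratio per component
scores = U * s                       # sample coordinates (PC scores)
```

With real data, `scores[:, :2]` gives the PC1/PC2 plot and `explained[:5].sum()` the cumulative PC1–PC5 variance.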
The results indicated that the PLS-DA analysis effectively separated the metabolite profiles of the two bean varieties and the three treatment time points along the t1 and t2 axes (Figure A). The permutation test of the PLS-DA model also indicated that the model was not overfitted (Figure B). Analysis of the VIP values for all metabolites revealed that 428 metabolites had VIP values greater than 1 (Table ). The top twenty metabolites ranked by VIP value are shown in Figure C.
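Screening for metabolites with VIP > 1 and listing the top twenty is a filter-and-sort over the VIP scores produced by the PLS-DA model. A sketch with hypothetical VIP values:

```python
# Filter metabolites by VIP > 1 (from a PLS-DA model) and rank the most
# informative ones, as in the VIP screening described above.
# The VIP scores below are hypothetical.
vip_scores = {
    "L-Tryptophan": 1.92,
    "Sucrose": 1.41,
    "Galactinol": 1.05,
    "Citrate": 0.88,
    "Maltose": 0.63,
}
informative = {m: v for m, v in vip_scores.items() if v > 1.0}
ranked = sorted(informative, key=informative.get, reverse=True)
top = ranked[:20]  # with real data this is the top-20 list
```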
This experiment established seven comparison groups, comprising within-variety comparisons of BBL and WY across time points and between-variety comparisons of BBL and WY. First, we examined the influence of cold stress on each common bean variety. In the cold-sensitive common bean BBL, compared with the 0 h time point, 1923 upregulated and 1937 downregulated DEGs were identified after 6 h of low-temperature treatment (Fig. A), along with 177 upregulated and 192 downregulated DEMs (Fig. B). The combined KEGG enrichment analysis revealed significant enrichment of DEGs in pathways such as isoflavonoid biosynthesis, flavonoid biosynthesis, beta-alanine metabolism, valine, leucine and isoleucine degradation, and phenylpropanoid biosynthesis, while DEMs were significantly enriched in the lysine degradation and starch and sucrose metabolism pathways (Fig. C, Figure ). The GO enrichment analysis of DEGs identified annotations for 19 biological processes, three cellular components, and 12 molecular functions (Fig. G). After 24 h of low-temperature treatment, there were 3487 upregulated and 3489 downregulated DEGs in BBL (Fig. A), along with 252 upregulated and 231 downregulated DEMs (Fig. B). The combined KEGG enrichment analysis revealed significant enrichment of DEGs in pathways such as plant hormone signal transduction, flavonoid biosynthesis, isoflavonoid biosynthesis, galactose metabolism, benzoxazinoid biosynthesis, zeatin biosynthesis, and phenylpropanoid biosynthesis, while DEMs were significantly enriched in cyanoamino acid metabolism and valine, leucine and isoleucine degradation (Fig. D, Figure ). The GO enrichment analysis of DEGs identified annotations for 20 biological processes, three cellular components, and 13 molecular functions (Figure A).
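Pathway enrichment p-values of the kind reported here are commonly computed with a one-sided hypergeometric (Fisher) test. The sketch below uses only the standard library, and all counts are hypothetical:

```python
from math import comb

# One-sided hypergeometric test, the standard basis of KEGG/GO
# enrichment: probability of observing >= k pathway genes among n DEGs
# drawn from N background genes, of which K belong to the pathway.
def enrichment_p(N, K, n, k):
    return sum(
        comb(K, i) * comb(N - K, n - i) for i in range(k, min(K, n) + 1)
    ) / comb(N, n)

# Hypothetical counts: 400 DEGs from a 2000-gene background; 20 of the
# pathway's 40 members fall among the DEGs (only ~8 expected by chance)
p = enrichment_p(2000, 40, 400, 20)
```

In practice the per-pathway p-values are then corrected for multiple testing (e.g. Benjamini–Hochberg) before declaring significance.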
In the cold-tolerant common bean WY, compared with the 0 h time point, 1290 upregulated and 1359 downregulated DEGs were identified after 6 h of low-temperature treatment (Fig. A), along with 180 upregulated and 244 downregulated DEMs (Fig. B). The combined KEGG enrichment analysis revealed significant enrichment of DEGs in pathways such as flavonoid biosynthesis, plant hormone signal transduction, stilbenoid, diarylheptanoid and gingerol biosynthesis, flavone and flavonol biosynthesis, and phenylpropanoid biosynthesis, while DEMs were significantly enriched in glyoxylate and dicarboxylate metabolism (Fig. E, Figure ). The GO enrichment analysis of DEGs identified annotations for 20 biological processes, three cellular components, and 12 molecular functions (Figure A). After 24 h of low-temperature treatment, there were 3762 upregulated and 3566 downregulated DEGs in WY (Fig. A), along with 182 upregulated and 256 downregulated DEMs (Fig. B). The combined KEGG enrichment analysis revealed significant enrichment of DEGs in pathways such as isoflavone biosynthesis, isoflavonoid biosynthesis, plant hormone signal transduction, flavonoid biosynthesis, galactose metabolism, flavone and flavonol biosynthesis, carotenoid biosynthesis, valine, leucine and isoleucine degradation, and glutathione metabolism (Fig. F, Figure ). Furthermore, the GO enrichment analysis of DEGs identified annotations for 20 biological processes, three cellular components, and 13 molecular functions (Figure B). Integrating the enriched pathways of the differential genes and metabolites after low-temperature treatment in BBL and WY, most pathways were associated with sugar and organic acid metabolism, amino acid metabolism, and flavonoid biosynthesis and metabolism; pathways such as plant hormone signal transduction, photosynthesis, and starch and sucrose metabolism were also identified.
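The DEGs shared across all comparison groups (as visualized in the petal diagrams discussed above) reduce to a set intersection over the per-comparison DEG lists. A sketch with hypothetical sets, seeded with the one shared upregulated DEG named in the text:

```python
# DEGs shared by every comparison group are simply the intersection of
# the per-comparison DEG sets. All gene sets below are hypothetical,
# except that Phvul.007G135400 is the shared DEG reported in the text.
deg_sets = {
    "BBL_6h_vs_0h":  {"Phvul.007G135400", "Phvul.001G000100", "Phvul.002G000200"},
    "BBL_24h_vs_0h": {"Phvul.007G135400", "Phvul.002G000200"},
    "WY_6h_vs_0h":   {"Phvul.007G135400", "Phvul.003G000300"},
    "WY_24h_vs_0h":  {"Phvul.007G135400", "Phvul.001G000100"},
}
shared = set.intersection(*deg_sets.values())
```

The same intersection over DEM sets yields the shared metabolites (e.g. L-tryptophan in the reported results).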
Notably, no significant difference was observed in GO enrichment between BBL and WY common beans after low-temperature treatment.
Furthermore, we compared the effects of cold stress on two common bean varieties with differing cold tolerance. At 0 h, the cold-tolerant variety WY had 937 upregulated DEGs and 1170 down-regulated DEGs compared to the cold-sensitive bean variety BBL (Fig. A). There were also 229 upregulated DEMs and 196 down-regulated DEMs (Fig. B). KEGG pathway enrichment analysis revealed that DEGs were significantly enriched in pathways related to galactose metabolism, ascorbate and aldarate metabolism, valine, leucine, and isoleucine degradation, plant hormone signal transduction, isoquinoline alkaloid biosynthesis, and phenylalanine metabolism (Figure A, B). DEMs were significantly enriched in pathways such as fatty acid degradation, isoflavonoid biosynthesis, and cyanoamino acid metabolism (Figure A, B). After 6 h of low-temperature treatment, there were 1396 upregulated DEGs and 1384 downregulated DEGs in BBL compared to WY (Fig. A). Additionally, BBL had 249 upregulated DEMs and 240 downregulated DEMs compared to WY (Fig. B). The KEGG pathway enrichment analysis showed that DEGs were significantly enriched in pathways related to flavonoid biosynthesis, isoflavonoid biosynthesis, phenylpropanoid biosynthesis, amino sugar and nucleotide sugar metabolism, ascorbate and aldarate metabolism, galactose metabolism, and starch and sucrose metabolism. DEMs were significantly enriched in pathways such as isoflavonoid biosynthesis, isoquinoline alkaloid biosynthesis, and alpha-linolenic acid metabolism (Fig. A, Figure C). After 24 h of low-temperature treatment, there were 995 upregulated DEGs and 1163 downregulated DEGs in BBL compared to WY (Fig. A); There were also 194 upregulated DEMs and 282 downregulated DEMs in BBL compared to WY (Fig. B). 
The KEGG pathway enrichment analysis revealed significant enrichment of DEGs in pathways such as galactose metabolism, ascorbate and aldarate metabolism, amino sugar and nucleotide sugar metabolism, alpha-linolenic acid metabolism, isoflavonoid biosynthesis, plant hormone signal transduction, phenylpropanoid biosynthesis, flavonoid biosynthesis, and pentose and glucuronate interconversions (Fig. B, Figure D). DEMs were significantly enriched in pathways such as ascorbate and aldarate metabolism, isoflavonoid biosynthesis, and valine, leucine, and isoleucine degradation (Fig. B, Figure D). These findings indicate that at 6 h of low-temperature treatment, the cold-sensitive BBL and cold-tolerant WY common bean varieties exhibited the highest number of DEGs and DEMs. These were significantly enriched in pathways associated with flavonoid biosynthesis, sugar acid metabolism, amino acid metabolism, and plant hormone signal transduction. Therefore, in subsequent experiments, our focus was directed toward analyzing the response of common beans to low-temperature stress, specifically examining sugars and amino acids, flavonoid secondary metabolites, plant hormones, and associated genes. The DEGs petalogram results show that following low-temperature treatment, there was one upregulated DEG, Phvul.007G135400 (Fig. C), shared across all comparison groups of the cold-sensitive BBL cowpea and cold-tolerant WY common bean. There were also 25 downregulated DEGs (Fig. D). Moreover, the DEMs petalogram results reveal that following low-temperature treatment, three upregulated DEMs (Fig. D) were shared among all comparison groups of BBL and WY common bean, namely L-Tryptophan, Vasicinol, and Shizukaol D. Notably, there were eight down-regulated DEMs (Fig. E).
Transcription factors play a crucial regulatory role in the response mechanism to cold stress. To elucidate this regulatory network, transcription factor prediction was performed on all DEGs (11212) identified from transcriptome sequencing. The predicted results showed that the top six transcription factor families with the most annotations were RLK-Pelle-DLSV (93), AP2/ERF-ERF (76), bHLH (71), MYB (64), WRKY (56), and NAC (54), respectively (Fig. A). Most of the DEGs in transcription factors are implicated in the response to low-temperature stress, including bHLH and MYB families, which have been previously reported to regulate plant cold tolerance . The bHLH transcription factor plays a multifaceted role, regulating physiological and biochemical processes such as signal transduction. In addition, it is actively involved in modulating various stress responses, including those to drought and cold stress. This study identified 71 bHLH gene family transcription factors from the pool of DEGs (Fig. B). Among them, the cold stress treatment for 24 h significantly upregulated the expression of six bHLH transcription factors (Phvul.003G181900.v2.1, Phvul.002G018300.v2.1, Phvul.003G140800.v2.1, Phvul.002G283100.v2.1, Phvul.001G121200.v2.1, and Phvul.003G157100.v2.1) in cold-sensitive BBL beans. Cold stress for 6 h significantly upregulated the expression of six bHLH transcription factors (Phvul.009G137400.v2.1, Phvul.010G120000.v2.1, Phvul.002G017600.v2.1, Phvul.006G196600.v2.1, Phvul.001G085500.v2.1, and Phvul.002G088400.v2.1) in cold-tolerant WY beans. Simultaneously, low-temperature treatment for 24 h significantly downregulated the expression of five bHLH transcription factors (Phvul.003G067500.v2, Phvul.006G028500.v2.1, Phvul.006G184600.v2.1, Phvul.001G031300.v2.1, and Phvul.009G023500.v2.1) in cold-sensitive BBL and cold-tolerant WY beans. 
In addition, Phvul.001G023200.v2.1 and Phvul.003G231200.v2.1 exhibited contrasting trends between cold-sensitive BBL and cold-tolerant WY beans under cold stress conditions (Fig. B). The MYB transcription factor family is one of the largest in plants, and numerous research findings support its potential as a transcription factor for plant breeding and enhancement. In this study, 64 MYB gene family transcription factors were identified from the pool of DEGs (Fig. C). Among them, low-temperature treatment for 24 h significantly downregulated the expression of six MYB transcription factors (Phvul.002G170500.v2.1, Phvul.010G009900.v2.1, Phvul.003G176800.v2.1, Phvul.008G262900.v2.1, and Phvul.009G187700.v2.1) in cold-sensitive BBL and cold-tolerant WY beans, while significantly upregulating the expression of 25 MYB transcription factors in both varieties. At the same time, the changing trends of Phvul.007G108500-v2.1, Phvul.008G041500-v2.1, and Phvul.007G215800-v2.1 in BBL and WY beans after 24 h of low-temperature treatment are inconsistent.
Previous research has demonstrated that low-temperature stress significantly impacts sugar metabolism, amino acid synthesis, and metabolic pathways in plants . Therefore, we integrated DEGs and DEMs to construct pathway diagrams for sugar and amino acid metabolism. As shown in Fig. , low-temperature stress significantly increased the levels of galactinol, sucrose, melibiose, trehalose, and maltose in both cold-sensitive BBL common beans and cold-tolerant WY cowpeas, while it decreased the level of fructose-6P. Additionally, the expression patterns of raffinose, manninotriose, D-galactose, D-fructose, glucose, and glucose-6P differed inconsistently between cold-sensitive BBL common beans and cold-tolerant WY cowpeas (Fig. A). Concurrently, the expression levels of genes encoding myo-inositol galactosyltransferase (Phvul.001G223700.v2.1, Phvul.007G203400.v2.1), cotton galactose synthase (Phvul.004G007100.v2.1), hexokinase (Phvul.010G144900.v2.1), fructokinase (Phvul.011G091600.v2.1), pyruvate kinase (Phvul.004G000800.v2.1), phosphofructokinase/phosphatase (Phvul.003G150400.v2.1, Phvul.008G172400.v2.1, Phvul.009G054400.v2.1), α-amylase (Phvul.006G185000.v2.1), and β-amylase (Phvul.003G226900.v2.1) also showed a marked increase (Fig. B). Upon entry into the citric acid cycle, low-temperature stress significantly reduced the levels of 2-ketoglutaric acid in cold-sensitive BBL common beans and cold-tolerant WY common beans and influenced the aconitic acid content. The aconitic acid content decreased significantly with prolonged exposure to low temperature in BBL common beans, whereas in WY common beans, it initially decreased and then increased (Fig. A). Additionally, the expression levels of genes encoding isocitrate dehydrogenase (Phvul.002G114600.v2.1, Phvul.007G150400.v2.1) increased significantly with prolonged low-temperature treatment (Fig. B). 
Under low-temperature stress conditions, the levels of certain amino acids and their downstream metabolites, including proline, L-leucine, L-valine, tyrosine, D-aspartic acid, and L-aspartate, significantly increased in both cold-sensitive BBL common beans and cold-tolerant WY common beans. The levels of L-lysine, glutamine, and citrulline in both cold-sensitive BBL common beans and cold-tolerant WY common beans exhibited a pattern of initial increase followed by subsequent decrease with prolonged exposure to low temperature. Furthermore, the alterations in the levels of glutamate, GABA, citruline, arginine, L-isoleucine, tyramine, and L-asparagine exhibited divergent trends between cold-sensitive BBL common beans and cold-tolerant WY common beans (Fig. A). Meanwhile, the expression levels of genes encoding 2-Oxoglutarate dehydrogenase (Phvul.010G017700.v2.1, Phvul.009G156400.v2.1), isobutyryl-CoA dehydrogenase (Phvul.011G073500.v2.1), and 3-Methylcrotonyl-CoA carboxylase (Phvul.003G291600.v2.1) significantly decreased with prolonged low-temperature treatment in both cold-sensitive BBL common beans and cold-tolerant WY common beans. The expression levels of genes encoding tyrosine metabolism enzyme (Phvul.006G143500.v2.1) and aspartate aminotransferase (Phvul.001G250600.v2.1) increased with prolonged low-temperature treatment in both cold-sensitive BBL common beans and cold-tolerant WY common beans (Fig. B).
Flavonoids constitute a crucial category of secondary metabolites in plants, exerting significant biological functions. Herein, we identified 19 types of DEMs and 15 types of DEGs associated with the biosynthesis and metabolism of flavonoids in common beans. As shown in Fig. , cold stress significantly augmented the levels of L-tyrosine, L-phenylalanine, naringenin chalcone, naringenin, genistein, apigenin, apigenin 7-Glucoside, and pratenstein, while significantly diminishing the content of coumestrol and genistein 7,4’-Di-O-β-D-glucopyranoside. These substances exhibited consistent trends in both BBL and WY beans. The levels of eriodictyol, quercetin, isoquercitrin, isosakuranetin, isoliquiritigenin, 7,4 ‘- Dihydroxyflavone, trans-5-O - (p-Coumaroyl) shikimate, luteolin, and chloroiol exhibited distinct trends between BBL and WY beans (Fig. A). At the same time, cold stress significantly upregulated the expression of genes encoding shikimate O-hydroxycinnamoyl transferase (the genes numbered 4 and marked in red in Fig. B), chalcone synthase (the genes numbered 5 and marked in red in Fig. B), and isoflavone 7-O-glucose-6 ‘’- O-malonyltransferase (Phvul.008G029400.v2.1, Phvul.008G032200.v2.1), and exhibited a consistent trend of change in BBL beans and WY beans. The gene encoding phenylalanine ammonia-lyase (Phvul.001G177700.v2.1, Phvul.001G177800.v2.1) displayed divergent trends of change between BBL and WY beans. Exposure to cold stress for 6 h significantly upregulated the expression of the PAL gene in cold-sensitive bean BBL. In addition, cold stress for 24 h significantly upregulated the expression of genes encoding 5-O - (4-coumaroyl) - D-quinoate 3 ‘- monooxygenase and chalcone synthase (the genes numbered 6–7 and marked in red in Fig. B), 2-hydroxyisoflavone synthase (Phvul.003G074000.v2.1, Phvul.003G051800.v2.1, Phvul.003G051801.v2.1, Phvul.003G051700.v2.1), and flavonoid 3’ – monooxygenase (Phvul.L001623.v2.1) in WY beans (Fig. B). 
These findings suggest that common beans may alleviate the effects of cold stress by modulating flavonoid levels and the expression of flavonoid biosynthesis genes.
Plant hormones play an important role in mediating the response to cold stress. In this study, we identified three DEMs and 18 DEGs implicated in the biosynthesis pathways of brassinolide (BR), abscisic acid (ABA), jasmonic acid (JA), and salicylic acid (SA) (Fig. ). In the BR pathway, the expression levels of Phvul.001G075500.v2.1, Phvul.002G318200.v2.1, Phvul.003G187200.v2.1, Phvul.003G247400.v2.1, Phvul.006G033300.v2.1, Phvul.002G047200.v2.1, and Phvul.003G143332.v2.1 significantly increased with the duration of cold stress. The expression levels of Phvul.008G186900.v2.1 and Phvul.010G056200.v2.1 significantly decreased with prolonged cold stress. In addition, the expression levels of Phvul.003G164800.v2.1, Phvul.003G247601.v2.1, Phvul.003G247651.v2.1, and Phvul.005G074000.v2.1 exhibited distinct trends in BBL and WY common beans (Fig. B). In the ABA pathway, metabolomic analysis revealed that cold stress for 24 h increased the ABA content in the leaves of both BBL and WY bean varieties (Fig. A). During ABA signal transduction, we identified five DEGs encoding PYR/PYL, 10 DEGs encoding PP2C, four DEGs encoding SnRK2, and five DEGs encoding ABF. Among these, the expression level of Phvul.002G141901.v2.1 decreased with prolonged cold stress, while the expression levels of most other DEGs increased with increasing cold stress duration. Furthermore, Phvul.001G246300.v2.1 exhibited differential changes only in cold-sensitive BBL beans, with no significant change observed in cold-tolerant WY beans (Fig. C). In the JA pathway, JA is synthesized from α-linolenic acid via the intermediates 13-HPOT and OPDA. The metabolomic results indicate that the content of α-linolenic acid significantly decreased only in the cold-sensitive bean BBL, while the variation was minor in the cold-tolerant bean WY (Fig. A).
During JA signal transduction, two DEGs encoding JAR1, one DEG encoding COI1, four DEGs encoding JAZ, and 11 DEGs encoding MYC2 were identified. Among them, the expression level of Phvul.003G231000.v2.1 decreased with prolonged low-temperature treatment, as did the expression levels of Phvul.009G225300.v2.1 and Phvul.006G198400.v2.1 (Fig. C). In the SA pathway, SA biosynthesis proceeds through the phenylalanine metabolism pathway, catalyzed by phenylalanine ammonia-lyase (PAL). The metabolomics results revealed a significant increase in the content of L-phenylalanine with prolonged cold stress, and this trend was similar in both cold-sensitive BBL beans and cold-tolerant WY beans. The content of SA in BBL beans increased with prolonged low-temperature treatment, whereas in WY common beans it initially increased and then decreased (Fig. A). During SA signal transduction, two DEGs encoding NPR1, two DEGs encoding TGA, and three DEGs encoding PR-1 were identified. Among them, the expression levels of DEGs encoding NPR1 increased with prolonged low-temperature treatment, while those of DEGs encoding TGA decreased. In addition, the expression levels of Phvul.003G148550.v2.1 and Phvul.006G197500.v2.1, encoding PR-1, increased with prolonged low-temperature treatment, while that of Phvul.006G196900.v2.1 decreased (Fig. C).
Cold stress represents a significant abiotic stress leading to reduced yield and quality in leguminous plants. Studies have demonstrated that integrating transcriptomic and metabolomic analyses can offer comprehensive insights into the metabolic regulation and molecular mechanisms underlying cold stress . Herein, we identified 11,837 DEGs and 923 DEMs through comparative analysis of transcriptomics and widely targeted metabolomics. Our analysis unveiled their extensive involvement in the synthesis and transduction of plant metabolites and hormones.

The effects of cold stress vary in terms of phenotype and physiological indicators between two different varieties of common beans

Low-temperature stress has notable impacts on the osmotic regulation system and antioxidant system of plants. In Brassica napus L. (rapeseed), low-temperature stress significantly elevated malondialdehyde levels and relative conductivity, alongside enhancing the activity of peroxidase and superoxide dismutase . Our findings demonstrate that low-temperature stress significantly elevated the malondialdehyde content, relative conductivity, and antioxidant enzyme activity in two distinct varieties of common beans, and the observed increments varied between the two varieties. These findings are consistent with those of Cai et al. in two varieties of Solanum melongena exhibiting differential cold tolerance and Wang et al. in Brassica campestris L. with varying cold tolerance levels . Together, they underscore that distinct varieties within the same species can manifest divergent responses to low-temperature stress.
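The filtering criteria behind gene lists such as the 11,837 DEGs are not stated in this excerpt. As a hedged, stdlib-only sketch of how differential expression calls are commonly made from per-gene statistics (the |log2FC| ≥ 1 and adjusted-p < 0.05 cutoffs and the toy gene IDs are illustrative assumptions, not the authors' values):

```python
def call_degs(results, lfc_cut=1.0, padj_cut=0.05):
    """Filter per-gene statistics down to differentially expressed genes.

    `results` maps gene ID -> (log2 fold change, adjusted p-value), as
    produced by tools such as DESeq2 or edgeR. The |log2FC| >= 1 and
    padj < 0.05 thresholds are illustrative defaults, not the study's
    stated cutoffs.
    """
    degs = {}
    for gene, (lfc, padj) in results.items():
        if abs(lfc) >= lfc_cut and padj < padj_cut:
            degs[gene] = ("up" if lfc > 0 else "down", lfc, padj)
    return degs

# Toy table with hypothetical gene IDs: two genes pass, one is filtered out.
demo = {
    "Phvul.A": (2.3, 0.001),   # strongly up, significant -> DEG
    "Phvul.B": (-1.6, 0.020),  # down, significant        -> DEG
    "Phvul.C": (0.4, 0.300),   # small change             -> filtered out
}
print(call_degs(demo))
```

In practice the per-gene statistics come from dedicated count-based models; only the final threshold filter is sketched here.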
The role of transcription factors in the cold response of common beans

Currently, a plethora of plant transcription factors have been shown to respond to low-temperature stress, primarily modulating plant cold resistance through involvement in processes such as plant cell membrane fluidity, cold signal transduction, the MAPK cascade, and regulation of CBF pathways . Studies have revealed that transgenic apple callus and Arabidopsis plants overexpressing MdMYB23 exhibit heightened cold tolerance . MdMYB23 has been shown to bind the promoter of MdANR, a key regulator of anthocyanin biosynthesis, thereby activating its expression, promoting anthocyanin accumulation, and facilitating the clearance of reactive oxygen species (ROS) . In addition, plant basic helix-loop-helix (bHLH) transcription factors play pivotal roles in plant growth and development, secondary metabolism, and responses to abiotic stress . In rice, the upregulation of OsbHLH1 is specifically triggered by cold stress, suggesting its involvement in the cold signaling pathway of rice . In this study, a comprehensive analysis identified a total of 71 bHLH and 64 MYB gene family transcription factors (Fig. ). Interestingly, we observed dynamic regulation of gene expression levels within these families in response to increasing duration of low-temperature treatment, while the expression levels of certain transcription factors decreased with increasing treatment time. These findings suggest that bHLH and MYB family transcription factors might exert either positive or negative regulation on the cold tolerance of common beans. However, further validation is warranted to corroborate these observations.

The effect of low-temperature stress on sugar acid metabolism and amino acid metabolism in common beans

Research has demonstrated the crucial role of soluble sugars such as galactose, as well as starch, pyruvate, and amino acids, in augmenting plant tolerance to abiotic stress .
A previous study found that the levels of soluble starch, fructose, glucose, and phenolic substances in citrus leaves undergo significant elevation after cold stress, underscoring the potential importance of sugar and secondary metabolism pathways in citrus responses to cold stress . Research by Zhao et al. suggests that overexpression of β-amylase (PbrBAM3) in Pyrus betulaefolia facilitates starch degradation under cold stress, thereby enhancing cold tolerance . In our study, we observed a significant reduction in the levels of galactinol, sucrose, melibiose, trehalose, maltose, and fructose-6P in common beans under cold stress (Fig. ). This finding suggests that common beans may enhance their cold tolerance by modulating the levels of sugar and acid substances. Furthermore, amino acids are crucial for protein synthesis, and previous studies have highlighted the pivotal role of amino acid metabolism in enhancing plant abiotic stress tolerance . Several amino acids, including proline, arginine, asparagine, glutamine, and GABA, are synthesized at high abundance under abiotic stress conditions, serving various functions such as maintaining compatible osmotic pressure, acting as precursors of secondary metabolites, or serving as storage forms of organic nitrogen . Among them, proline is an osmotic regulator and is crucial in mediating plant responses to abiotic stress, especially extreme temperature stress . Our findings indicate that cold stress significantly elevates the content of proline, L-leucine, L-valine, tyrosine, D-aspartic acid, and L-aspartate in bean leaves, suggesting that beans may alleviate cold stress through regulation of amino acid content.

Effects of cold stress on secondary metabolite flavonoids in common beans

Flavonoids play a significant role in the secondary metabolism of plants, particularly in their response to cold stress .
Our study identified 19 DEMs and 15 DEGs associated with the biosynthesis and metabolism of flavonoids in common beans. Transcriptomic and metabolomic analysis of two peach trees with differing cold tolerance revealed that certain secondary metabolites, including phenolic acids and flavonoids, were exclusively upregulated in cold-sensitive peach trees . In Galega officinalis , cold stress enhances the production of phenolic compounds, including flavonoids, apigenin, coumaric acid, genistein, luteolin, trans-ferulic acid, and naringenin . Combined transcriptomic and metabolomic analysis of Fagopyrum tataricum at different altitudes revealed that cold stress significantly upregulated phenylpropanoid biosynthesis and promoted the expression of anthocyanins . Plant isoflavones are naturally occurring plant estrogens belonging to the flavonoid class and cannot be synthesized in the human body . A previous study found that low-temperature treatment significantly increased the content of phenolic acids and isoflavones (genistein, daidzein, and genistein) in soybean roots, with the largest increase observed in genistein after 24 h of treatment at 10 °C . Our findings indicate that low-temperature stress significantly increases the content of isoflavone metabolites, such as genistein, and of isoflavone synthesis precursors, including L-phenylalanine, naringenin chalcone, and naringenin. The expression trends in cold-tolerant WY and cold-sensitive BBL common beans are consistent (Fig. 10), suggesting that common beans may alleviate low-temperature stress by upregulating isoflavone substances.

The effect of cold stress on plant hormones in common beans

Furthermore, plant hormones are pivotal in co-regulating plant secondary metabolism and enhancing resistance against abiotic stress-induced damage in plants .
The accumulation of anthocyanins, a secondary metabolite in plants, is influenced by various abiotic stresses, such as high light intensity, cold, drought, salinity, nutrient deficiency, and heavy metal stress, as well as by the induction of endogenous plant hormones . By studying the adaptation mechanism of corn to cold stress, it was found that gibberellins play a regulatory role in the accumulation of anthocyanins induced by low temperatures . During cold stress, transgenic rice lines overexpressing OsABA8ox1 , a gene involved in abscisic acid synthesis, exhibited reduced ABA content and enhanced seedling vitality. This suggests that maintaining low levels of ABA during cold stress can promote seedling vigor . In Arabidopsis, treatment with an epibrassinolide (EBR) solution enhances the tolerance of seedlings to both drought and cold stress . Research has also revealed that the rice-specific microRNA miR1320 targets the ethylene-responsive transcription factor OsERF096, thereby regulating cold stress tolerance by suppressing the JA-mediated cold signaling pathway . This study identified three DEMs and 18 DEGs involved in the biosynthesis pathways of BR, ABA, JA, and SA in common beans. We also found that the content of SA in cold-sensitive common bean BBL increased with prolonged exposure to low temperatures. Conversely, in the cold-resistant WY common bean variety, SA content exhibited an initial increase followed by a decrease with prolonged low-temperature treatment (Fig. A). These findings suggest that common beans may alleviate oxidative damage induced by low temperatures by regulating endogenous SA levels. However, variations exist among varieties with differing levels of cold tolerance.
This study elucidated the impact of cold stress on physiological parameters, gene expression, and metabolite profiles of common bean seedlings through an integrated analysis encompassing phenotypic physiology, transcriptomics, and metabolomics. The results suggest that under cold stress, the DEGs and DEMs of common beans are engaged in primary metabolism, secondary metabolism, and plant hormone signal transduction, especially in synthesizing secondary metabolites such as isoflavones. Furthermore, it was found that bHLH and MYB transcription factors exhibit extensive involvement in the cold response of common beans. In summary, this study offers valuable insights for a more comprehensive understanding of the cold resistance mechanism in common beans and the exploration of cold-tolerant bean germplasm resources.
Below is the link to the electronic supplementary material. Supplementary Material 1
Panoramic Radiographic Analysis of Age- and Sex-related Variations in Upper Mandibular Morphology: Focus on the Condyle, Sigmoid Notch, and Coronoid Process

Study design. This cross-sectional study analyzed panoramic radiographs to investigate morphological variations of the upper mandible, focusing on the right (RPC) and left (LPC) condylar processes, right (RSN) and left (LSN) sigmoid notches, and right (RPCO) and left (LPCO) coronoid processes. The aim was to evaluate these structures' variations by age and sex in a sample population. Sample population. A total of 150 individuals, aged between 18 and 80 years, participated in the study, with a mean age of 46.17±22.48 years. The cohort consisted of 89 males (59.3%) and 61 females (40.7%). Participants were divided into two age groups: younger than 46 years (n=82) and older than 46 years (n=68), as well as by sex. Inclusion and exclusion criteria. The inclusion criteria were as follows: 1) age 18 to 80 years; 2) availability of clear, recent panoramic radiographs with no significant image distortion; 3) no history of prior surgical interventions affecting the mandible; 4) no known congenital mandibular anomalies or fractures; 5) no clinical signs or history of TMJ dysfunction at the time of radiographic imaging. Exclusion criteria included severe dental malformations (e.g., edentulous jaws, multiple missing teeth), previous mandibular trauma, systemic conditions affecting bone morphology (e.g., osteoporosis), or any radiographs showing technical artifacts that compromised the visibility of the mandibular structures. Individuals with incomplete radiographs or those lacking visibility of the RPC, LPC, RPCO, or LPCO were also excluded. Radiographic analysis. Panoramic radiographs were obtained using standardized radiographic techniques in a clinical setting.
All images were captured using the same model of panoramic radiographic machine (Planmeca ProMax, Helsinki, Finland), which was calibrated prior to each imaging session to ensure consistent image quality and magnification. Images were obtained with the patient's head in a natural upright position, ensuring that the Frankfort horizontal plane was parallel to the floor during image capture. Morphological variations of the RPC, LPC, RSN, LSN, RPCO, and LPCO were classified by shape: round, flat, diamond, convex, sloping, wide, triangular, or beak. These characteristics were analyzed using standardized criteria, and their relative distributions were compared between the age and sex groups. Each shape was determined by visual assessment, with digital caliper measurements used to quantify dimensions where necessary. All radiographic measurements were obtained by two independent observers (FD, OS), both experienced in panoramic imaging interpretation, to ensure inter-observer reliability. Discrepancies between observers were resolved through joint discussion and review of the images. Ethics approval and consent to participate. This study was conducted in accordance with the principles of the Declaration of Helsinki. Ethical approval was waived by the clinical ethics committee (IRB). All procedures and diagnostics performed were part of routine care. Informed consent was waived by the clinical ethics board due to the retrospective nature of the study. Statistical analysis. Data are presented as means and standard deviations for continuous variables, and as counts and percentages for categorical variables. Comparisons between groups were made using chi-square tests, and p-values were calculated to determine statistical significance. Data were recorded in a structured database and subjected to both descriptive and inferential statistical analyses.
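The inter-observer reliability described above is not quantified in this excerpt; Cohen's kappa is a standard statistic for agreement between two raters on categorical shape calls. A minimal stdlib sketch with hypothetical observer labels (the data and the resulting value are illustrative, not the study's):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical labels (e.g. shape classes).

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the chance agreement implied by each rater's marginal
    label frequencies.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[lab] / n) * (freq_b[lab] / n) for lab in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical condylar shape calls by the two observers for eight cases.
obs1 = ["round", "round", "flat", "diamond", "round", "flat", "convex", "round"]
obs2 = ["round", "flat",  "flat", "diamond", "round", "flat", "convex", "round"]
print(round(cohens_kappa(obs1, obs2), 2))
```

Values above roughly 0.8 are conventionally read as almost perfect agreement; in practice a library routine such as scikit-learn's cohen_kappa_score would be used.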
Continuous variables, such as age, were reported as means and standard deviations, while categorical variables, such as the shape of the condylar processes, sigmoid notches, and coronoid processes, were presented as counts and percentages. Comparisons between groups (age <46 years vs. age ≥46 years; male vs. female) were performed using chi-square tests for categorical variables, such as the distribution of morphological shapes, and t-tests for continuous variables where appropriate. For all statistical tests, a p-value of less than 0.05 was considered indicative of statistical significance.
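The chi-square comparisons described above can be sketched for one 2x2 case without external dependencies: for a 2x2 table, df = 1, and the chi-square survival function reduces to erfc(sqrt(x/2)). The counts below are reconstructed arithmetically from percentages reported in the Results (round-shaped RPC: 59.8% of 82 younger participants = 49; 41.2% of 68 older participants = 28); in practice scipy.stats.chi2_contingency would be used:

```python
from math import erfc, sqrt

def chi2_2x2(table):
    """Pearson chi-square test (no continuity correction) for a 2x2 table.

    `table` = [[a, b], [c, d]]; returns (chi-square statistic, p-value).
    With df = 1, the chi-square survival function is erfc(sqrt(x / 2)),
    so no scipy dependency is needed.
    """
    (a, b), (c, d) = table
    n = a + b + c + d
    rows = [a + b, c + d]
    cols = [a + c, b + d]
    chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            exp = rows[i] * cols[j] / n
            chi2 += (obs - exp) ** 2 / exp
    return chi2, erfc(sqrt(chi2 / 2))

# Round-shaped RPC by age group: 49/82 younger vs. 28/68 older.
stat, p = chi2_2x2([[49, 82 - 49], [28, 68 - 28]])
print(f"chi2 = {stat:.2f}, p = {p:.3f}")  # matches the reported p = 0.023
```

Note that adding Yates' continuity correction (as some software does by default) would give a slightly larger p-value.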
Baseline characteristics. The study included 150 participants with a mean age of 46.17±22.48 years. The cohort comprised 89 males (59.3%) and 61 females (40.7%) (Table I). Participants were grouped into two age categories: younger than 46 years (n=82) and older than 46 years (n=68), as well as by sex (Table I). Age-specific morphological variations. Analysis of the mandibular structures revealed significant age-related variations, particularly in the RPC and LPC (Table II). Younger participants (<46 years) showed a significantly higher frequency of round-shaped RPC (59.8%) compared to older individuals (41.2%), with a p-value of 0.023 (Table II). The occurrence of flat-shaped RPC was more common in the older age group, with 29.4% of older participants displaying this feature compared to only 3.7% of the younger cohort (p<0.001) (Table II, Figure 4). This trend was also observed for the LPC, where round shapes were more frequent in the younger group (63.4%) than in older participants (36.8%), while flat LPC shapes were predominant in older individuals (36.8%, p<0.001) (Table II, Figure 4). In addition to these differences in the condylar processes, age-related variations were also noted in the sigmoid notches (RSN and LSN) (Table II). The sloping shape of the RSN was more commonly found in younger participants, with 19.5% of the younger group displaying this shape compared to 10.3% of the older cohort (Table II). Conversely, wide RSN shapes were more frequently observed in the older age group (51.5% vs. 36.6%, p=0.056) (Table II). Similar trends were found for the LSN, where sloping shapes were more common in younger individuals (22.0%) and wide shapes were more prevalent in the older group (55.9%, p=0.067) (Table II). Sex-specific morphological variations. The study also revealed significant sex-related differences in mandibular morphology (Table III).
Females exhibited a higher prevalence of diamond-shaped RPC (19.7%) compared to males (7.9%, p=0.033) (Table III). In contrast, males had a greater occurrence of convex-shaped RPC, although this difference did not reach statistical significance (Table III). The analysis of the sigmoid notches revealed that wide RSN and LSN shapes were more common in females (50.8% and 55.7%, respectively) than in males (38.2% and 41.6%) (Table III). The coronoid processes (RPCO and LPCO) also demonstrated sex-specific variations (Table III). Triangular-shaped RPCO was significantly more common in females (62.3%) than in males (38.2%, p=0.004), suggesting that females tend to have more pronounced muscle attachment points at the coronoid process, potentially influencing mandibular movement and function (Table III). Round-shaped RPCO, on the other hand, was more frequently observed in males (51.7% vs. 34.4%, p=0.037), indicating a potential sex-based difference in the mechanical properties of the mandible (Table III).
This study examined the morphological variations of key mandibular structures (the RPC, LPC, RSN, LSN, RPCO, and LPCO) and explored the influence of age and sex on these variations. The findings reveal significant differences across various anatomical features, offering valuable insights into how these structures change with age and differ between males and females. These observations have important clinical implications, particularly in the fields of TMJ disorders, maxillofacial surgery, and orthodontics. Age-related differences. The study found that age significantly influenced the shape of the condylar processes and sigmoid notches. Younger individuals (<46 years) had a higher prevalence of round-shaped RPC and LPC, whereas older individuals showed a higher frequency of flat shapes in both the right and left condylar processes. These findings are consistent with previous research that links degenerative changes in the TMJ to aging, leading to flattening of the condylar heads over time . This may reflect the natural wear and tear associated with aging, which could have clinical implications for the diagnosis and treatment of TMJ disorders in older patients. Similarly, the sigmoid notch exhibited age-related variations, with younger participants more frequently presenting sloping shapes, whereas older individuals had a higher prevalence of wide-shaped sigmoid notches. These changes may be attributed to age-related remodeling of the mandibular notch, which could be influenced by both functional and degenerative factors . The widening of the sigmoid notch in older individuals may also reflect compensatory changes due to alterations in condylar shape and TMJ mechanics . Sex-related differences. Sex also played a significant role in the morphological variations observed in this study. Females had a higher prevalence of diamond-shaped RPC and wide sigmoid notches (RSN and LSN) compared to males.
These differences may be attributed to variations in hormonal influences and developmental patterns between males and females, which affect craniofacial growth and mandibular development. The higher frequency of diamond-shaped RPC in females suggests that sex-specific factors may influence condylar morphology, potentially affecting the biomechanics of the TMJ differently in males and females . Furthermore, the study identified that the triangular shape of the coronoid process (RPCO and LPCO) was significantly more common in females. Since the coronoid process plays a crucial role in muscle attachment and mandibular movement, these findings suggest that sex-specific morphological differences may contribute to differences in TMJ function, which could have implications for the diagnosis and management of TMJ disorders in males and females. Clinical implications. The observed variations in mandibular morphology have important clinical implications. The shape of the condylar and coronoid processes is critical for surgical planning in maxillofacial procedures, including TMJ surgeries, mandibular reconstruction, and orthognathic surgery . Understanding the tendency for flat condyles in older individuals can help surgeons anticipate potential challenges in TMJ arthroplasty or repair and may indicate progressive degeneration of the TMJ . Moreover, the sex-specific differences in mandibular morphology underscore the importance of considering patient sex in both diagnosis and treatment planning, particularly in orthodontic and reconstructive procedures. Since panoramic radiographs remain a commonly used diagnostic tool in dental and maxillofacial settings, recognizing the different variations in mandibular anatomy based on age and sex can help clinicians avoid misinterpretations that could lead to incorrect diagnoses or unnecessary interventions. Limitations and future directions.
While this study provides valuable insights into age- and sex-related variations in mandibular morphology, there are several limitations to consider. First, the study relied on a cross-sectional design, which limits the ability to infer longitudinal changes in mandibular morphology. Future studies should consider longitudinal designs to better understand the progressive nature of these variations. Additionally, the sample size, although adequate, could be expanded to include a more diverse population in terms of ethnicity and geographic background, as mandibular morphology may vary across different populations. Future research could also explore the functional implications of these morphological variations, particularly in relation to TMJ mechanics, bite force distribution, and muscle function. Investigating the relationship between morphological variations and clinical outcomes in patients with TMJ disorders could further enhance our understanding of how these anatomical differences contribute to pathology.
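Prevalence comparisons of the kind discussed above (shape category by age group or by sex) are commonly assessed with a Pearson chi-square test of independence. The sketch below uses hypothetical counts, not the study's data, and only illustrates the calculation for a 2x2 table.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]] (e.g. condyle shape category x age group)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: rows = round vs flat condyle, cols = <46 vs >=46 years
table = [[40, 15], [20, 45]]
chi2 = chi_square_2x2(table)
print(chi2 > 3.841)  # exceeds the 5% critical value for 1 df -> significant
```

With one degree of freedom, a statistic above 3.841 corresponds to p < 0.05.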
This study highlights significant age- and sex-related variations in the morphology of the condylar processes, sigmoid notches, and coronoid processes of the mandible. These findings provide important insights for clinical practice, particularly in the context of diagnosing and treating TMJ disorders, planning maxillofacial surgeries, and understanding the anatomical differences between males and females. Recognizing these variations can enhance the precision of diagnostic imaging and improve outcomes in surgical interventions, ultimately contributing to more personalized and effective patient care.
The Authors declare that no funds, grants, or other support were received during the preparation of this manuscript.
The Authors have no relevant financial or non-financial interests to disclose in relation to this study.
OS, MG, and FD treated the patients and revised the article. FD, SD, and CK researched the scientific literature, and provided statistical findings/analysis. FD wrote the article. All Authors gave final approval for publication.
Are women adequately informed about the use of instrumentation during vaginal delivery? A prospective review of the information on instrumental delivery provided to pregnant women and a retrospective review of the quality of consent for instrumental delivery

In the UK, forceps and ventouse assisted (instrumental) delivery are used in 10%–15% of all deliveries and in 23% of primiparous deliveries . The use of instrumentation is associated with an increased risk of maternal and neonatal injury . Pelvic floor injuries consequent to instrumental vaginal delivery may lead to significant functional problems, such as faecal incontinence, perineal pain, dyspareunia, bladder dysfunction and bowel evacuatory problems. The Green Top guidelines of the Royal College of Obstetricians and Gynaecologists (RCOG) state that ‘women should be informed about assisted vaginal birth in the antenatal period, especially during their first pregnancy. If they indicate specific restrictions or preferences then this should be explored with experienced obstetricians, ideally in advance of labour’ and ‘When midpelvic or rotational birth is indicated, the risks and benefits of assisted vaginal birth should be compared with the risks and benefits of second stage caesarean birth for the given circumstances and skills of the operator’ . This study has two parts. The first was to determine the level of understanding amongst pregnant women about the different stages of labour and the potential use of instrumentation. The second was to determine the quality of consent obtained when instrumentation was used. The study is based on the results presented in Chapters 8 and 10 from the MD thesis ‘The impact of obstetric anal sphincter injuries’ deposited in July 2024 at Imperial College London by Alessandra Orlando, a member of the author list for this article.
Ethical approval was obtained from the West Midlands South Birmingham Research Ethics Committee (Iras number 289693) in December 2020. The study was conducted at London North West University Healthcare Trust (LNWH) (Northwick Park Hospital [NPH] and St Mark's Hospital). NPH is a large District General Hospital where around 4200 births take place each year and St Mark's Hospital is a tertiary referral centre for pelvic floor disorders such as anal and faecal incontinence after obstetric anal sphincter injury. Participants were identified from the antenatal care database at NPH. For the first part of the study, women aged between 18 and 55 years old, who were at least 36 weeks pregnant and planning to have a vaginal delivery, were prospectively recruited between 2 July 2021 and 30 June 2022 (Figure ). Once identified, they were contacted via telephone to further assess their capacity, ability to speak and understand English, and eligibility. The participants who agreed to take part in the study were consented and interviewed via telephone. The questionnaire had six questions:

1. Is this your first pregnancy?
2. What do you know about labour?
3. Have the stages of labour been explained to you? If yes, can you explain them to me in your own words?
4. Have you attended antenatal classes or met a midwife or an obstetrician at this stage?
5. Have they mentioned instrumentation? If yes, what have they told you about that?
6. What do you know about it?

Answers were analysed in a qualitative and quantitative manner. Descriptive analysis was used for the quantitative analysis of the data. Qualitative thematic analysis of the open-ended questions in the questionnaire was applied following the method described by Braun and Clarke . Themes and patterns were identified with an inductive approach. Data were read and reread before coding and identifying themes, which were then discussed and agreed with all authors.
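The descriptive (quantitative) analysis mentioned above amounts to tallying categorical answers into counts and percentages. A minimal sketch, using hypothetical response data rather than the study's records:

```python
from collections import Counter

def tally(answers):
    """Per-category counts and percentages for one questionnaire item."""
    counts = Counter(answers)
    n = len(answers)
    return {cat: (c, round(100 * c / n, 1)) for cat, c in counts.items()}

# Hypothetical responses to "Is this your first pregnancy?" (138 participants)
first_pregnancy = ["yes"] * 59 + ["no"] * 79
summary = tally(first_pregnancy)
print(summary["yes"])  # (59, 42.8), i.e. 59/138, reported as 43% in the text
```

The same tally applies to each closed-ended item, while open-ended answers go to the thematic analysis instead.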
A group of clinical experts (colorectal surgeons with a special interest in the pelvic floor, obstetricians with a special interest in urogynaecology, and pelvic floor physiotherapists) were involved in the study design. The study was initially presented at a general patient forum meeting at LNWH. Following this, the questionnaire was presented to a focus group of women who had previously received instrumentation at childbirth. These women voluntarily took part in the focus group; they were recruited through Mush, a social media platform for expectant and new mothers. The questions were modified according to their feedback on the language used and the number of questions, given the sensitive topic of the research. If the questioning caused anxiety or curiosity about an instrumental delivery, patients were directed to the local obstetric team for further information. For the second (non-consecutive) part of the study, the clinical notes of patients who had an instrumental vaginal delivery at NPH between 2 December 2020 and 27 January 2021 were reviewed. The consent process was interrogated to ascertain which risks were explained in relation to the use of instrumentation, when the women were consented, and what level of analgesia (painkillers) had been given at the time of consent.
Prospective assessment of information provided to pregnant patients

A total of 595 women were eligible and were contacted. Of these, 138 agreed to complete the questionnaire. Fifty-nine participants (43%) were in their first pregnancy (Table ). The framework analysis of the answers to open-ended questions highlighted different themes.

First question: ‘What do you know about labour?’

From the online National Health Service (NHS) data dictionary , the definition of labour is ‘Labour is a period of time when there are painful contractions and changes to the cervix that result in the birth of a baby and end with the expulsion of the placenta and membranes.’ Framework analysis of the answers to this question highlighted the themes given in Table . Thirty-five women (25.36%), all in their first or second pregnancy, gave a more accurate definition of labour, listing the main signs and symptoms (release of the mucous plug, waters breaking, onset of contractions and increasing frequency of contractions) as part of the information received from midwives and doctors. This information was provided by health professionals to instruct women about the proper time to attend hospital. The remaining 103 participants (74.64%) were not able to give a complete answer about labour (Table ).

Second question: ‘Have the stages of labour been explained to you? If yes, can you explain them to me in your own words?’

As explained on the NHS website, there are three stages of labour : ‘during the first stage of labour, contractions make your cervix gradually open (dilate). This is usually the longest stage of labour. At the start of labour, your cervix starts to soften so it can open. This is called the latent phase and you may feel irregular contractions. It can take many hours, or even days, before you're in established labour. Established labour is when your cervix has dilated to about 4 cm and regular contractions are opening your cervix. Your cervix needs to open about 10 cm for your baby to pass through it. This is what's called being fully dilated. When you reach the end of the first stage of labour, you may feel an urge to push. The second stage of labour lasts from when your cervix is fully dilated until the birth of your baby. The third stage of labour happens after your baby is born, when your womb contracts and the placenta comes out through your vagina.’

When asked about this, 84 (60.9%) of the 138 participants stated that they had not been taught about the stages of labour. Fifty-four (39.1%) stated that they had. The 54 participants who answered that they knew about the stages of labour were asked to explain them in their own words. Six participants (4.3% of the total) were able to give a complete explanation of labour and its stages. Twenty-four participants (17.3% of the total) gave a partial explanation of the stages of labour: they mentioned some of the stages or gave a partially correct answer (e.g., ‘if we are having back pain, cramps, dilation started but not much, early stage. I don't know what happens after: maybe pain increases’), showing that they did not know what the stages of labour are. Ten participants (7.2% of the total) did not know how to explain (‘yes, three stages, I don't know how to explain’). Fourteen participants (10.1% of the total) replied with an answer that was off topic (‘There are different kinds of labour, strong, quick, long’), showing that they really did not know what the stages of labour are.

Third question: ‘Have you been offered and attended any antenatal classes?’

Of the 138 participants, 115 (83.3%) did not attend antenatal classes because they were not offered any or were unaware of them. Of these 115 participants, eight explained that it was because of the COVID-19 pandemic, three did not know what an antenatal class was and 20 had attended in the past, during a previous pregnancy. Twenty-four participants replied that they had attended antenatal classes, three of whom through private antenatal classes; 11 did so online through a link provided by the hospital.

Fourth question: ‘Have you met a midwife and/or an obstetrician during your pregnancy?’

Seventy (50.7%) women were seen exclusively by midwives during their antenatal appointments and 65 women (47.1%) met both midwives and doctors (Figure ). One participant (0.7%) did not know what to answer and two participants (1.4%) did not reply.

Fifth question: ‘Have health professionals mentioned instrumentation (forceps and ventouse)?’

Ninety-five participants (68.8%) were not informed about instrumentation; 15 (10.8%) did not know what instrumentation was. The answers of the 28 (20.3%) participants who replied ‘yes’ when asked ‘what have health professionals told you about that?’ are represented in Table .

Sixth question: ‘What do you know about instrumentation?’

This was specifically asked to understand what the common knowledge is about instrumentation and its use, regardless of what the health professionals had informed women about. Eighty participants (58%) replied that they did not know anything about instrumentation. The remaining 58 participants (42%) gave an answer about their general knowledge of instrumentation. Framework analysis of the answers identified six main topics (Table ). Twenty-nine participants (21%) were aware that instruments are used to help the baby come out. Only three participants (2.2%) were aware of a potential risk of tearing in relation to the use of instrumentation.

The second part: retrospective assessment of quality of consent for instrumentation

Of the 74 instrumental vaginal deliveries (either forceps or ventouse assisted) that took place at LNWH in the selected time frame, 59 sets of case notes were available and were reviewed. In 18.6% (n = 11) of the case notes there was no record of informed consent. In 61% (n = 36) verbal consent was documented in the notes and in 20.3% (n = 12) a separate signed consent form was part of the notes (Table ). When consent was documented, it was obtained during the second stage of labour, when opioid analgesia had already been administered. Analgesia provided during labour by the time consent was taken comprised an epidural infusion of bupivacaine 0.1% w/v with fentanyl 2 μg/mL, pethidine 100 mg intramuscularly and nitrous oxide. At the time of consent, the recorded amount of analgesia given was as follows: 57% (n = 34) of participants had received an epidural, on average four boluses +/− 2.4 SD (maximum 11 boluses, minimum 1 bolus); 18% (n = 11) had received pethidine 100 mg intramuscularly at least once during labour; and 45.7% (n = 27) had had nitrous oxide.
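The bolus figures above (mean +/− SD with minimum and maximum) are standard descriptive statistics. A minimal sketch with hypothetical per-patient counts, not the study's data:

```python
import statistics

def summarize_boluses(counts):
    """Mean, sample SD, minimum and maximum of per-patient epidural bolus counts."""
    return {
        "mean": statistics.mean(counts),
        "sd": statistics.stdev(counts),  # sample SD (n - 1 denominator)
        "min": min(counts),
        "max": max(counts),
    }

# Hypothetical per-patient bolus counts (the study reports n = 34 patients)
counts = [1, 2, 4, 4, 5, 11, 3, 2]
s = summarize_boluses(counts)
print(f"{s['mean']:.1f} +/- {s['sd']:.1f} SD (min {s['min']}, max {s['max']})")
```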
At LNWH approximately 350 deliveries take place every month; around 50 of these are instrumental deliveries, that is, around one in seven or 14%. The Green Top guidelines NO26 of the Royal College of Obstetricians and Gynaecologists state that ‘Women should be informed about assisted vaginal birth in the antenatal period, especially during their first pregnancy. If they indicate specific restrictions or preferences then this should be explored with an experienced obstetrician, ideally in advance of labour.’ Our study suggests that very limited information is provided to expectant mothers about the possible use of instrumentation or about its possible consequences. It is known that instrumentation increases the risk of a perineal tear. Furthermore, the risk of an obstetric anal sphincter injury in the UK is thought to be around 6% in primipara . Instrumentation is a well recognized risk factor for these injuries . The functional problems associated with an obstetric anal sphincter injury are well documented. These include anal incontinence, chronic pain, dyspareunia and evacuatory disorders. Such injuries are also associated with significant psychological morbidity and will have a negative impact on the quality of life of those affected. It is also worth noting that obstetric forceps, known to be more damaging than ventouse deliveries, are not, or are very rarely, used in many other countries such as the USA and some countries in Europe. Only 25% of the participants appeared to be able to describe the stages of labour. Those who replied they knew about stages of labour gave a partially correct answer or went off topic. Women at their first pregnancy appeared keener to get information from health professionals about what to expect for labour in comparison to women who had a previous delivery. Women who had two or more deliveries gave answers mainly based on their previous experience and not in line with formal information from health professionals. 
Most participants did not attend antenatal classes because they were not offered any; some explained that this was related to the COVID-19 outbreak, and some were offered online classes only. Most of the women were not informed by health professionals about the existence of instrumentation; their knowledge of its usage and risks came mainly from personal experience, friends' experiences or self-directed research on the internet. Stohl , Trandel-Korenchuk and Moore explained how important it is to inform pregnant women extensively, before childbirth, about the risks and complications related to the use of instrumentation. Similar results were found by Koster et al. , who examined the perception of childbirth in women who had experienced a traumatic childbirth and found that these women reported a lack of information and consent from clinicians at childbirth. The first part of this study appears to demonstrate significant deficiencies in awareness of the stages of labour, the risks of vaginal delivery and the use of instrumentation. When 14% of deliveries require instrumentation (the damaging consequences of which are well recognized) and up to 6% may suffer an obstetric anal sphincter injury, it is difficult to believe that pregnant women are not better informed. It would be well outside acceptable practice to proceed with, for example, a right hemicolectomy without mentioning the risk of anastomotic leak (5%), or a cholecystectomy without mentioning the risk of bile leak (1%). Yet, in this study, a failure to communicate the risk of perineal injury and instrumentation seemed to be common practice. For the second part of this study, we found that there was no evidence of consent in 18.6% of the reviewed clinical notes. The emergency nature of the procedure does not justify the absence of consent: ‘In the emergency situation, verbal consent should be obtained which should be witnessed by another care professional.
Obstetricians and the witness to verbal consent must record the decision and the reason for proceeding to any emergency delivery without written consent’ . Sturgeon et al. noted that ‘NHS National Services Scotland (2018–2019) reported 123 medical negligence claims closed between January 2015 and December 2019 where the reason for claim was identified as “failure to obtain informed consent”.’ It is well recognized that informed consent is not simply a signature on a consent form. A signed consent form merely documents that the ‘process’ of informed consent has taken place . Gerancher et al. surveyed patients who were consented for epidural analgesia during labour and found that written consent combined with verbal consent gave participants the highest recall of the consent process months after childbirth. Only 20% of patients in our study had signed a consent form. Wada et al. conducted a systematic review of studies on women's decision-making about epidural analgesia for pain management in labour. They suggest that empirical evidence to date is insufficient to determine whether women undergoing labour have full capacity when consenting to epidural analgesia. Given this uncertainty, sufficient information about pain management should be provided as part of antenatal education and consent should be taken for this prior to the onset of labour. More prospective and retrospective studies are required on this topic. One may rightly question the validity of consent for instrumentation taken during the second stage of labour. First, the patient may be under the effect of opiates, high doses of local anaesthetic or nitrous oxide, all with associated cognitive effects. Second, consent will often have been taken during a time of urgency, when maternal exhaustion and adverse fetal signs have become apparent. This may appear to add an element of coercion.
All this would seem to be a long way from the NHS definition of informed consent ‘voluntary, informed, and with capacity’ (Consent to treatment—NHS, https://www.nhs.uk/conditions/consent‐to‐treatment/ ). Furthermore, the justification that the use of instrumentation is an emergency life‐saving measure and that suboptimal consent may be tolerated should not be accepted. This is not a rare or unforeseen intervention. Remembering that one in seven patients require instrumentation, it is very much a foreseen intervention. This study was limited by the fact that only a single institution was used and that the time frame for data collection was limited. The restrictions imposed due to the COVID‐19 pandemic are also likely to have had an impact on the ability to inform pregnant women about delivery, and this may have impacted the results reported in this study. A multi‐centre study conducted along similar lines, over a longer period of time, may allow a better evaluation of the consent process in the antenatal and intrapartum periods across the UK.
Further studies are needed to determine deficiencies in patient information about childbirth and consent for operative delivery.
Alessandra Orlando: Conceptualization; investigation; writing – original draft; methodology; validation; visualization; writing – review and editing; software; formal analysis; project administration; data curation; resources. Gregory P. Thomas: Conceptualization; supervision; validation; writing – review and editing; visualization; project administration; resources. Ruwan Fernando: Conceptualization; supervision; project administration; writing – review and editing; validation; resources; visualization. Jamie Murphy: Conceptualization; methodology; validation; visualization; supervision; resources. Nada Elsaid: Project administration; data curation. Stella Dilke: Data curation; project administration. Carolynne J. Vaizey: Conceptualization; visualization; supervision; project administration; writing – review and editing; validation; resources.
There has not been any funding for the studies.
No conflict of interest identified.
Ethical approval was obtained from the West Midlands South Birmingham Research Ethics Committee (Iras number 289693) in December 2020.
Layer-by-layer coated hybrid nanoparticles with pH-sensitivity for drug delivery to treat acute lung infection

Introduction

Bacteria-induced infectious diseases are a severe burden to human health worldwide (Prestinaci et al., ; van der Poll et al., ). The discovery of antibiotics has provided a therapeutic strategy for the treatment of these infectious diseases over the past several decades (Arias and Murray, ; Ling et al., ), but these small-molecule drugs are easily and predominantly eliminated via the reticuloendothelial system (RES) and organs such as the kidney, resulting in poor antibacterial efficacy and low bioavailability (Roberts, ; Kuno et al., ). Furthermore, the off-targeting of antibiotics may cause severe side-effects, particularly damage to healthy tissues and organs, leading to susceptibility to infections (Dinarello, ; Wade and Williams, ). In addition, drug resistance of ‘superbugs’ has emerged as one of the biggest threats on the antimicrobial battleground, resulting from the misuse/overuse of antibiotics in humans and animals (Holmes et al., ; Thanner et al., ; Ayukekbong et al., ; Blaskovich, ; Tacconelli et al., ). These drug-resistant bacteria can survive common free-antibiotic treatment, resulting in longer hospital stays with high costs and high mortality rates (Aslam et al., ). For instance, spectinomycin/methicillin-resistant Staphylococcus aureus is able to cause severe infections of the skin and soft tissues, and this superbug is also associated with acute infectious diseases such as nosocomial pneumonia. As reported, drug-resistant bacteria cause more than 20,000 deaths per year in the United States (Klein et al., ; Klevens et al., ; Mendy et al., ; Kavanagh, ).
Therefore, it is of great importance and urgency to develop effective therapeutics and treatment strategies to eliminate invading bacteria, especially for the treatment of acute lung infection (ALI). With the rapid development of nano-bio-technology, multi-functional biomaterial-based drug delivery systems have attracted increasing attention for improving therapeutic efficacy (Radovic-Moreno et al., ; Gupta et al., ; Yang et al., ). For instance, Wang and coworkers developed an infectious-microenvironment-sensitive nanoparticle for simultaneous delivery of ciprofloxacin and TPCA-1 (an anti-inflammatory drug) to manage bacteria-caused ALI and sepsis in mice (Zhang et al., ). Ruoslahti and colleagues reported biocompatible nanoparticles that could target invading S. aureus in the body, showing high antimicrobial activity and low systemic side-effects in the treatment of skin and lung infection (Hussain et al., ). In this study, inspired by the acidic microenvironment at the infection site (Zhang et al., ; Ma et al., ; Chen et al., ), we developed a hybrid nanoparticle based on a liposome and polymers through extrusion and layer-by-layer (LbL) processes for delivery of antibiotics to treat acute lung infection . Here, spectinomycin (Spe) was selected as a model antibiotic (Lee et al., ) and was encapsulated into the liposomal core through the pH-gradient method. The liposomes were prepared from 1,2-distearoyl-sn-glycero-3-phospho-(1′-rac-glycerol) (DSPG) and hydrophobic cholesterol (Chol), both with good biocompatibility. The polycationic polymer poly(β-amino ester) (PBAE), widely used in drug delivery carriers with pH sensitivity, was used as a functional layer for pH-triggered drug release (Zhang et al., ; Kaczmarek et al., ; Huang et al., ; Li et al., ; Men et al., ).
A polyanionic sodium alginate (NaAlg) layer was then deposited on the surface of the NPs via the LbL process (Jain and Bar-Shalom, ; Ilgin et al., ), resulting in Spe-loaded liposome-polymer hybrid NPs (Spe@HNPs). The physicochemical properties of the Spe@HNPs, including hydrodynamic diameter, surface charge, drug loading content and release performance, were thoroughly investigated. The antibacterial efficacy and cytotoxicity in vitro were assessed, and the therapeutic efficacy against acute lung infection in vivo was evaluated. The designed Spe@HNPs might be a promising nanomedicine for anti-infection.
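The pH-gradient loading mentioned above relies on the weak-base drug crossing the lipid bilayer in its neutral form and becoming trapped by protonation in the acidic core. A rough sketch of the theoretical accumulation ratio follows; the pKa used is illustrative only, not a measured value for spectinomycin.

```python
def accumulation_ratio(pka, ph_in, ph_out):
    """Theoretical inside/outside concentration ratio of a monoprotic weak base
    across a transmembrane pH gradient (Henderson-Hasselbalch partitioning:
    only the neutral form crosses, so total drug scales with 1 + 10**(pKa - pH))."""
    return (1 + 10 ** (pka - ph_in)) / (1 + 10 ** (pka - ph_out))

# pH 4.0 citrate core and pH ~6.8 exterior, as in the preparation protocol;
# the pKa of 8.8 is an illustrative assumption.
ratio = accumulation_ratio(pka=8.8, ph_in=4.0, ph_out=6.8)
print(f"theoretical accumulation ~{ratio:.0f}-fold")
```

When pKa is well above both pH values, the ratio approaches 10 raised to the pH difference (here 10^2.8, a few hundred-fold), which is why raising the external pH after film hydration drives the drug into the liposomal core.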
Materials and methods
2.1. Materials
The lipid 1,2-distearoyl- sn -glycero-3-phospho-(1′- rac -glycerol) (DSPG) and hydrophobic cholesterol (Chol) were purchased from Avanti Polar Lipids. Spectinomycin (≥95%), sodium alginate (NaAlg), and thiazolyl blue tetrazolium bromide (MTT, 98%) were purchased from Sigma Aldrich. Chloroform, dimethyl sulfoxide (DMSO), and the other organic solvents were of analytical grade and were also purchased from Sigma Aldrich. The S. aureus (ATCC 29213) and MRSA (BAA40) strains, standard NIH 3T3 mouse fibroblast cells, and culture media were purchased from InVivos.
2.2. Preparation of Spe-loaded liposomes
Spe-loaded liposomes were prepared according to previous reports (Deng et al., ; Freag et al., ; Mensah et al., ; Men et al., ). Briefly, DSPG and Chol at a mass ratio of 3:1 were dissolved in a mixed solvent (chloroform:methanol:water = 60:32:8, v/v) in a round-bottom flask. A thin lipid film was prepared by rotary evaporation at 40 °C and 150 mbar. After the solvent was completely removed, the resulting film was hydrated with citric acid buffer (pH 4.0) at 65 °C under sonication for 90 min, followed by filtering through a 100 nm PES syringe filter. Sodium carbonate buffer was then added dropwise to raise the pH of the liposomal suspension to about 6.8. The model drug Spe was added at different feed ratios and loaded via the pH-gradient method. The solution was then purified by centrifugal filtration three times, and the resulting Spe-loaded liposomes were stored for further study.
2.3. Preparation of Spe@HNPs
Spe@HNPs were prepared according to the references (Deshmukh et al., ; Morton et al., ; Men et al., ). In brief, 5 mg of PBAE was added to 2 mg of Spe-loaded liposomes in solution (2 mL), and the mixture was incubated at room temperature with sonication for about 5 s. After centrifugation at 2000 g for 30 min, the PBAE-coated Spe-loaded NPs were obtained.
Similarly, 5 mg of NaAlg was then added to the solution so that an NaAlg layer was coated on the surface, yielding the Spe@HNPs. The particle size, polydispersity index (PDI), and surface charge of the samples were measured after each step to validate successful deposition of each layer.
2.4. Characterization
The particle size and surface charge of the Spe-loaded liposomes, the PBAE-coated Spe-loaded NPs, and the bilayered Spe@HNPs were measured by dynamic light scattering (DLS, Malvern Zetasizer Nano S, Malvern, UK). Each sample was re-suspended in PBS and measured in a quartz cuvette (1.0 mL) at room temperature. To evaluate the stability of the system, the particle size and PDI of samples were recorded after incubation in serum solution: 1 mg of Spe@HNPs was first suspended in PBS (1 mL) containing 20% fetal bovine serum (FBS) at pH 7.4, and the solution was then kept in an incubator at 37 °C with shaking at 110 rpm for five days. At predetermined time intervals, the particle size and PDI of the sample were recorded by DLS. To further confirm the stability of the system, 2 mg of Spe@HNPs was re-suspended in PBS (1 mL, pH 7.4) or 5% glucose solution, and the original solution was diluted 1/10, 1/100, and 1/1000 to prepare samples for DLS measurement. To confirm the pH sensitivity of Spe@HNPs, 1 mg of sample was first suspended in PBS (1 mL) at different pH values. After incubation at 37 °C with shaking at 110 rpm, the hydrodynamic diameter, polydispersity index, and surface charge of the solutions were monitored by DLS as described above. The morphology of the sample was examined by field-emission scanning electron microscopy (FE-SEM, JEOL JSM 6701F). Spe@HNPs in PBS were centrifuged at 7000 rpm for 10 min, and the pellet was resuspended in deionized water. A 2.5 µL drop of the suspension was placed on copper foil and dried overnight.
The Spe@HNPs were then sputter-coated with platinum at a current of 10 A for 120 s, and images were taken under FE-SEM at an acceleration voltage of 5 kV.
2.5. Drug loading capacity
To evaluate the drug loading efficacy of the HNPs, the drug loading content (LC) and encapsulation efficiency (EE) of Spe@HNPs were determined by high-performance liquid chromatography (HPLC). In brief, Spe@HNPs (0.5 mL, 2 mg/mL) was added to DMSO (10 mL) with stirring at room temperature for 1 h. The sample was measured by HPLC, and the amount of Spe was calculated from a standard curve. The LC of Spe@HNPs was defined as the weight ratio of Spe loaded in the HNPs to the total Spe@HNPs sample, and the EE as the weight ratio of Spe loaded in the HNPs to the total Spe fed during preparation.
2.6. Spe release profiles from Spe@HNPs
The in vitro release of Spe from Spe@HNPs was studied by the dialysis method (Zhang et al., ; Men et al., ). Briefly, Spe@HNPs (2 mL, 2 mg/mL) was re-suspended in PBS (4 mL) at pH 7.4 or 6.0, and the solution was transferred into a cellulose dialysis bag (molecular weight cutoff, MWCO 3500-4000). The dialysis bag was then immersed in the corresponding PBS (44 mL, pH 7.4 or 6.0) in a beaker with stirring at 110 rpm at 37 °C. At predetermined times, a sample of the solution (2 mL) was withdrawn from outside the dialysis bag for HPLC measurement, and the same volume of fresh PBS (pH 7.4 or 6.0) was added back. The percentage of cumulative drug release ( E_r ) from Spe@HNPs was calculated as

E_r = (V_e Σ_{i=1}^{n−1} C_i + V_0 C_n) / m_Spe × 100%

where m_Spe is the mass of Spe loaded into the HNPs, V_e is the volume withdrawn and replaced at each sampling (2 mL), V_0 is the total volume of the release medium (50 mL), and C_i is the concentration of released Spe in the i -th sample.
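The cumulative-release equation above corrects for the drug carried away each time an aliquot is withdrawn and replaced with fresh buffer. A minimal Python sketch of that bookkeeping (the function name and the sample values used below are illustrative, not from this study):

```python
def cumulative_release(concs, v_sample, v_total, m_drug):
    """Cumulative release (%) at each sampling point, correcting for the
    drug removed when each withdrawn aliquot is replaced with fresh buffer.

    concs: measured drug concentrations (mg/mL) at successive samplings
    v_sample: volume withdrawn and replaced at each sampling (mL)
    v_total: total release-medium volume (mL)
    m_drug: total drug mass loaded into the dialysis bag (mg)
    """
    released = []
    for n, c_n in enumerate(concs):
        # drug carried out in earlier aliquots + drug currently in the medium
        removed_earlier = sum(concs[:n]) * v_sample
        released.append((removed_earlier + v_total * c_n) / m_drug * 100.0)
    return released
```

With two samplings at 0.01 and 0.02 mg/mL, a 2 mL aliquot, a 50 mL medium, and 4 mg of loaded drug, the sketch returns cumulative releases of 12.5% and 25.5%.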
2.7. Cell culture
NIH 3T3 cells were cultured in Dulbecco's modified Eagle medium (DMEM) supplemented with 10% FBS, 100 units/mL penicillin, and 100 μg/mL streptomycin, and were maintained at 37 °C in an incubator with 5% CO2.
2.8. Cytotoxicity assay
The MTT assay was used to study the toxicity of free Spe, blank HNPs, and Spe@HNPs toward NIH 3T3 cells. In brief, NIH 3T3 cells cultured in DMEM were collected at the logarithmic growth phase and seeded into 96-well plates at 5000 cells/well in 200 μL. After incubation at 37 °C overnight, the medium was removed, and 200 μL of sample (free Spe, HNPs, or Spe@HNPs) at different concentrations in DMEM was added to each well; fresh medium was added similarly as a control. The plates were incubated for 24 h, after which the solution was removed and MTT solution (200 μL/well, 1 mg/mL) was added. The plates were shaken at 150-200 rpm for 5-10 min, incubated for another 4 h, and the medium was discarded. DMSO (200 μL) was then added with shaking at 150 rpm for 10-15 min, and the plates were read with a microplate reader at 570 nm. Cell viability was calculated as

Cell viability (%) = (A_sample − A_blank) / (A_control − A_blank) × 100%

where A_sample and A_control are the absorbances at 570 nm with and without sample treatment, respectively, and A_blank is the absorbance at 570 nm of medium only.
2.9. In vitro antimicrobial efficacy against bacteria
Free Spe and Spe@HNPs were diluted in 5 mM HEPES to a concentration of 2 mg/mL. Compounds were dispensed into the first wells of a flat-bottom 96-well microtiter plate and serially two-fold diluted with Mueller Hinton Broth (MHB) into successive wells in a final volume of 100 μL.
Bacteria were cultured at 37 °C in MHB to mid-log phase, and 100 μL of diluted bacteria was dispensed into each well of the plates. 100 μL of bacteria inoculated in MHB only served as the bacterial growth control. Plates were sealed with parafilm and incubated at 37 °C for 18 h. The minimal inhibitory concentration (MIC) was determined from the absorbance at 600 nm measured with a microplate reader (Liao et al., ; Si et al., ).
2.10. Mice
Adult CD-1 mice (18-20 g) were housed in polyethylene cages with stainless steel lids at 20-22 °C with a 12 h light/dark cycle. The cages were covered with a filter cap, and the mice were given food and water ad libitum. The China Medical University Institutional Animal Care and Use Committee approved all animal care and experimental protocols used in the studies.
2.11. Antimicrobial efficacy in vivo
Mice were anesthetized by intraperitoneal ( i.p. ) injection of a ketamine (120 mg/kg) and xylazine (6 mg/kg) mixture in saline, then placed head-up in a supine position on a board. The trachea of each mouse was exposed, and 10^6 CFU of MRSA BAA40 per mouse was administered intratracheally; the mouse was held upright for 1 min after administration. Four hours later, the mice were randomly grouped and intravenously injected with PBS, HNPs (4 mg/kg), free Spe (4 mg/kg), or Spe@HNPs (equivalent to 4 mg/kg of free Spe). At 24 h, the mice were anesthetized and the trachea was cannulated; a needle was inserted into the cannulated trachea, and PBS (1.5-3 mL) was infused and withdrawn to collect bronchoalveolar lavage fluid (BALF), which was stored for later analysis.
2.12. Measurement
BALF was centrifuged at 350 g for 5 min, and the supernatant was collected. The CFU in BALF was measured on LB plates: defined volumes of the supernatant were plated and incubated at 37 °C for 16-20 h, after which the CFU were counted.
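Two of the plate-based readouts above reduce to short calculations: back-calculating bacterial density from a colony count, and reading the MIC off a two-fold serial-dilution row. A minimal Python sketch of both (the OD threshold and all example values are illustrative assumptions, not from this study):

```python
def cfu_per_ml(colonies, dilution_factor, plated_volume_ml):
    """Bacterial density (CFU/mL) from a colony count on one plate.

    colonies: number of colonies counted on the plate
    dilution_factor: fold dilution of the sample before plating
    plated_volume_ml: volume spread on the plate (mL)
    """
    return colonies * dilution_factor / plated_volume_ml


def mic_from_row(start_conc, od600, threshold=0.1):
    """MIC from one row of a two-fold serial dilution (well 0 = start_conc).

    Returns the lowest concentration with no visible growth (OD600 below
    threshold), or None if bacteria grow even at the starting concentration.
    """
    conc, mic = start_conc, None
    for od in od600:
        if od >= threshold:   # visible growth: drug no longer inhibitory
            break
        mic = conc            # lowest concentration so far with no growth
        conc /= 2.0
    return mic
```

For example, 150 colonies from a 1000-fold-diluted sample plated at 0.1 mL corresponds to 1.5 × 10^6 CFU/mL, and a row starting at 64 µg/mL with growth first appearing in the fourth well gives an MIC of 16 µg/mL.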
The concentrations of the inflammatory factors TNF- α , IL-6, and IL-1 β in the supernatant were determined with ELISA MAX Deluxe Sets (Biolegend, San Diego, CA). The protein content of the BALF supernatant was determined by the BCA method using a commercial kit (Thermo Scientific, Rockford, IL). The cell pellet was collected and counted to record the leukocyte number.
2.13. H&E staining
Lungs were harvested after the different treatments (PBS, HNPs, free Spe, and Spe@HNPs), fixed with 10% formalin, embedded in paraffin, and sectioned at 5 μm, followed by staining with hematoxylin and eosin. The prepared slices were imaged by fluorescence confocal microscopy (ZEISS, Observer.Z1, USA).
2.14. Statistical analysis
Experimental data are presented as mean ± standard deviation (s.d.). Statistical analysis was conducted using one-way ANOVA or Student's t -test in Origin 8.5.
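Two of the quantitative formulas used above, the MTT viability percentage (Section 2.8) and the equal-variance two-sample t statistic behind the comparisons in Section 2.14, can be sketched with the standard library alone. This is a simplified sketch: Origin's t-test also reports a p-value from the t distribution, which is omitted here, and the example inputs in the comments are hypothetical.

```python
from statistics import mean, stdev


def cell_viability(a_sample, a_control, a_blank):
    """MTT cell viability (%) from 570 nm absorbances (Section 2.8).

    a_sample: well with cells + treatment; a_control: cells + medium only;
    a_blank: medium only (background).
    """
    return (a_sample - a_blank) / (a_control - a_blank) * 100.0


def t_statistic(a, b):
    """Equal-variance two-sample Student's t statistic for groups a and b."""
    na, nb = len(a), len(b)
    # pooled variance of the two groups
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5
```

For instance, an absorbance of 0.55 against a control of 0.95 and a blank of 0.15 corresponds to 50% viability.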
Results and discussion
3.1. Preparation and characterization of Spe@HNPs
The bilayered drug-loaded liposome-polymer hybrid nanoparticles (Spe@HNPs) were prepared by the film hydration method, extrusion, and layer-by-layer processes. First, liposomes were prepared from DSPG and Chol at a mass ratio of 3:1, and the drug was then loaded via the pH-gradient method: the pH in the liposomal core was about 4.0, while the external pH was adjusted to about 6.5. The solubility of the drug was significantly higher at pH 4.0 than at pH 6.5 owing to protonation of the amine residues of spectinomycin. The PBAE and NaAlg layers were then successively deposited on the surface of the Spe-loaded liposomes via the LbL process through polyelectrolyte interactions. This process was followed by measuring the hydrodynamic diameter, PDI, and zeta-potential of the system after each layer deposition, as shown in . The particle size of the Spe-loaded liposomes was approximately 150 nm, and it increased to about 170 nm after deposition of the PBAE layer. After deposition of the anionic NaAlg layer, the particle size increased further to 198 nm . This progressive increase in particle size after each deposition step indicated that the PBAE and NaAlg layers were successively coated on the surface of the nanoparticles. As shown in , the PDI values of the samples increased from 0.114 to 0.202 (< 0.3) after deposition of the functional layers, indicating that the drug-loaded samples retained good uniformity before and after the LbL process. To further confirm the successful deposition of each functional layer, the surface charge of the sample was recorded at each step, as shown in . The zeta-potential of the uncoated Spe-loaded liposomes was about −50.5 mV (negative), and it increased markedly to +27.8 mV (positive) after coating with the polycationic PBAE layer.
After deposition of the polyanionic NaAlg layer, the surface charge of Spe@HNPs decreased again to about −55.0 mV (negative). Additionally, the characteristic peak at 1680 cm−1, arising from the stretching vibration of amides in PBAE, indicated successful deposition of the PBAE layer on the liposome surface, while the characteristic peaks from 3200 cm−1 to 1680 cm−1, attributed to intermolecular hydrogen bonding and multi-molecule association in DSPG, Chol, PBAE, and NaAlg, indicated successful deposition of the NaAlg layer ( Figure S1 ). In summary, the complete charge reversal (negative-positive-negative) after each deposition step and the FT-IR spectra together indicated that the PBAE and NaAlg layers were successfully coated on the drug-loaded liposomes. Next, the morphology of Spe@HNPs was characterized by FE-SEM, as shown in . The Spe@HNPs were uniformly spherical, with a particle size consistent with the DLS measurement; the size was slightly smaller, likely owing to the lyophilization required for the FE-SEM test. Taken together, the bilayered multifunctional Spe@HNPs based on liposomes and the PBAE and NaAlg layers were successfully prepared by the LbL process. The drug loading capacity of the hybrid nanoparticles was evaluated next. The drug loading contents and encapsulation efficiencies of Spe@HNPs at different drug-to-liposome ratios are listed in . At a mass ratio of 1:1 (Spe:liposome, m/m), the LC was 335.7 µg of drug per mg of liposome and the EE was about 33.0%. When the feed ratio was increased to 2:1, the LC increased to 478.3 µg/mg, while the EE decreased to 24.4% because of the excess unloaded drug. With a further increase of drug in the feed (3:1), the LC increased only slightly while the EE fell below 20%.
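These LC and EE values follow directly from the definitions in Section 2.5, each a simple mass ratio. A minimal Python sketch (the example masses below are hypothetical, not the measured data):

```python
def loading_metrics(m_drug_loaded, m_total_sample, m_drug_fed):
    """Drug loading content (LC, %) and encapsulation efficiency (EE, %).

    m_drug_loaded: mass of drug actually entrapped in the nanoparticles
    m_total_sample: total mass of the drug-loaded nanoparticle sample
    m_drug_fed: total mass of drug added during preparation
    """
    lc = m_drug_loaded / m_total_sample * 100.0
    ee = m_drug_loaded / m_drug_fed * 100.0
    return lc, ee
```

For example, 2 mg of entrapped drug in a 10 mg sample from an 8 mg feed gives LC = 20% and EE = 25%; raising the feed raises LC but lowers EE, as the table shows.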
Therefore, Spe@HNPs with a mass ratio of Spe:liposome = 2:1 were used in the following studies.
3.2. pH-sensitivity and stability
To analyze the pH-responsive behavior of Spe@HNPs, the hydrodynamic diameter, PDI, and surface charge of Spe@HNPs at different pH values were measured, as shown in and Figure S2 . The particle size of Spe@HNPs increased dramatically from about 200 nm to 300 nm as the pH decreased from 8.0 to 4.0, especially under weakly acidic conditions . The PDI of Spe@HNPs displayed a similar trend, particularly over the pH range of 7.0 to 6.0 ( Figure S2 ). The likely reason is that protonation of the tertiary amine residues in the PBAE layer in an acidic environment switched PBAE from hydrophobic to hydrophilic, causing the Spe@HNPs to swell and the particle size to increase. Furthermore, the surface charge of Spe@HNPs changed from negative to positive owing to ionization of the tertiary amine residues in the PBAE layer . Collectively, the pH-dependent changes in particle size, PDI, and surface charge demonstrated that the bilayered Spe@HNPs system is pH-sensitive. High stability of a drug delivery system is a precondition for clinical use, so the serum stability of Spe@HNPs was evaluated. First, the particle size and PDI of Spe@HNPs incubated in PBS (pH 7.4) with 20% FBS at 37 °C were recorded daily, as shown in . The particle size of Spe@HNPs increased only slightly, from 200 nm to about 220 nm, after 5 days of incubation, indicating high serum stability; the PDI values remained below 0.3 at day 5, with no significant increase in size or PDI. To further evaluate the stability of Spe@HNPs, the particle sizes and PDI values of Spe@HNPs in PBS or 5% glucose solution were measured, as shown in Figure S3 .
No significant changes in particle size or PDI were observed even after 1000-fold dilution, indicating that Spe@HNPs were highly stable. Together, these findings demonstrated that the prepared Spe@HNPs had high serum stability, suggesting a prolonged circulation time in the body that would facilitate accumulation of Spe@HNPs at the infection site. In summary, the bilayered Spe@HNPs system exhibited pH-sensitivity and high stability, making it suitable for drug delivery with a pH-triggered release profile.
3.3. In vitro pH-triggered drug release performance
We next studied the drug release profiles of Spe@HNPs in PBS (pH 7.4, normal physiological conditions) and weakly acidic buffer (pH 6.0, mimicking the infectious microenvironment), as shown in . The drug release rate and cumulative release from Spe@HNPs clearly differed between the two pH conditions. At pH 7.4, release was slow: the cumulative amount was less than 30% at 10 h and about 33% at 24 h. Under normal physiological conditions, the tertiary amine residues of PBAE are not ionized and the Spe@HNPs remain compact, so the drug molecules are well protected within the liposomes. In contrast, the release rate and cumulative release of Spe from Spe@HNPs were dramatically accelerated at pH 6.0: about 90% of the loaded drug was released at 10 h, and almost all of it by 24 h. The likely reason is that the tertiary amine residues of PBAE were fully protonated at pH 6.0, causing the HNPs to swell, which accelerated drug release and increased the cumulative amount released. In addition, the acidic external environment itself facilitated drug release compared with normal physiological conditions. The release behavior of Spe@HNPs thus confirmed the pH-sensitivity of the HNPs, consistent with the results in .
In summary, the prepared Spe@HNPs showed a pH-triggered release profile, with acid significantly enhancing both the release rate and the cumulative release; this property can be exploited for on-demand controlled drug release.
3.4. Antimicrobial efficacy in vitro
The antimicrobial efficacy of Spe@HNPs against S. aureus and the drug-resistant bacterium MRSA BAA40 was evaluated next, as shown in . The MIC of free Spe against S. aureus was approximately 2 µg/mL, while that of Spe@HNPs was below 1 µg/mL, showing that both free Spe and Spe@HNPs were highly active against S. aureus . For the drug-resistant MRSA BAA40, however, no MIC was found for free Spe (higher than 64 µg/mL), indicating low antimicrobial activity and a poor inhibition effect. By contrast, Spe@HNPs retained high antimicrobial activity with a much lower MIC (4 µg/mL). A time-kill assay ( Figure S4 ) was also performed to further evaluate the in vitro antimicrobial efficacy of Spe@HNPs. Both free Spe and Spe@HNPs efficiently inhibited the growth of S. aureus compared with the control; against MRSA BAA40, however, free Spe exhibited a negligible inhibition effect, whereas Spe@HNPs clearly inhibited growth compared with both free Spe and the control, demonstrating their high antimicrobial efficacy. A likely explanation is that the positively charged Spe@HNPs under weakly acidic conditions can disrupt the cytoplasmic membrane and cause leakage of the cytosol, which then facilitates the pharmaceutical action of the formulation and leads to bacterial death; this synergistic effect significantly improved the antimicrobial efficacy of Spe@HNPs. To meet the requirements of biomedical application, the system should also have negligible toxicity.
Therefore, the in vitro cytotoxicity of free Spe, HNPs, and Spe@HNPs toward NIH 3T3 cells was measured, as shown in Figure S5 . The cytotoxicity of HNPs increased slightly with concentration, but cell viability remained approximately 90% even at the highest concentration of 500 µg/mL, indicating a negligible toxic effect of blank HNPs. For free Spe, the viability of NIH 3T3 cells decreased markedly with increasing concentration: viability was about 80% at 100 µg/mL and fell below 50% above 300 µg/mL. In contrast, the cytotoxicity of Spe was clearly reduced after formulation in Spe@HNPs, with viability above 80% even at 500 µg/mL. In summary, the prepared Spe@HNPs could effectively kill the drug-resistant bacterium with negligible cytotoxicity.
3.5. Therapeutic efficacy of Spe@HNPs for ALI
We next investigated whether Spe@HNPs could eliminate bacteria after MRSA BAA40 was administered directly to the mouse lung. At 4 h after administration of the bacteria into the lung, free Spe, HNPs, or Spe@HNPs was intravenously ( i.v. ) injected into the ALI-bearing mice. At 24 h, BALF was collected and analyzed to evaluate the therapeutic efficacy, as shown in . The CFU in the BALF of mice treated with Spe@HNPs was remarkably lower than with free Spe treatment or the controls , demonstrating that bacterial proliferation was effectively inhibited by systemic administration of Spe@HNPs. The number of infiltrated leukocytes in BALF was also recorded after the different treatments . Free Spe treatment only slightly alleviated leukocyte infiltration, whereas Spe@HNPs treatment sharply reduced the number of infiltrated leukocytes.
Moreover, the histological studies of lungs after Spe@HNPs treatment for 20 h ( Figure S6 ) displayed that the leukocyte infiltration was obviously decreased compared with other controls, indicating the reduced inflammation level. Furthermore, the inflammatory factors (TNF- α , IL-1 β , and IL-6, ) were obviously decreased after Spe@HNPs treatment in comparison to others. These results proved that the inflammation of mice treated with Spe@HNPs was well mitigated, showing the reduction of bacterial burden in the lungs. As reported, the protein permeability in the lung was associated with the vasculature integrity, and the low vasculature integrity suggested severe inflammation (Mehta and Malik, ; Molinaro et al., ; Zhang et al., ). As shown in , the protein content in BALF of mice treated with Spe@HNPs was much lower compared with free Spe and other treatments, indicating that the lung vasculature was repaired after the invasive bacteria were removed. In addition, the blank carrier HNPs also exhibited a slightly therapeutic effect compared with control, possibly due to the positive charge on the surface of NPs which broke the bacterial membrane and induced the death of bacteria. Furthermore, the plasma Spe concentration as a function of time was examined by intravenous injection ( i.v. ) of various formulations to healthy mice. Figure S7 showed the pharmacokinetics (PK) of free Spe and Spe@HNPs in vivo . The blood half-life ( t 1/2 ) of free Spe was less than 0.5 h, showing the free Spe molecules were rapidly cleared from the blood which might lead to poor therapeutic efficacy. In contrast, Spe@HNPs had prolonged blood circulation time ( t 1/2 = 5 h) due to the protection of HNPs, indicating the enhanced accumulation of system and high concentration of Spe in the lung which would lead to the higher therapeutic efficacy. The biosafety of Spe@HNPs was preliminarily evaluated here to prove the potential use in the clinic, as shown in Figures S8 and S9 . 
The results of blood biochemistry analysis ( Figure S8 ) indicated that the heart function marker (CK), hepatic function markers (ALT, AST), and renal function markers (CREA, BUN) in the Spe@HNPs group exhibited negligible difference compared with the normal group. The weight of major organs (especially lung) treated with free Spe@HNPs showed no difference compared with the normal group. However, the weight of lungs of mice treated with free Spe was significantly decreased in comparison to those of normal and Spe@HNPs groups. These results suggested the high biosafety of Spe@HNPs with high therapeutic efficacy. Taken together, the prepared Spe@HNPs could remarkably improve the therapeutic efficacy for a drug-resistant bacterium-induced acute lung infection and mitigate the inflammation response with reduced side-effect.
Preparation and characterization of Spe@HNPs

The bilayered drug-loaded liposome-polymer hybrid nanoparticles (Spe@HNPs) were prepared using the film hydration method, extrusion, and layer-by-layer (LbL) processes. Firstly, liposomes were prepared from DSPG and Chol at a mass ratio of 3:1, and the drug was then loaded via the pH-gradient method. The pH in the liposomal core was about 4.0, while the pH outside was adjusted to about 6.5. The solubility of the drug was significantly higher at pH 4.0 than at pH 6.5 owing to the protonation of amine residues in spectinomycin. Then, the PBAE and NaAlg layers were successively deposited on the surface of the Spe-loaded liposomes via the LbL process through electrostatic (polyelectrolyte) interactions. This process was followed by measuring the hydrodynamic diameter, PDI, and zeta-potential of the system after the deposition of each layer, as shown in . The particle size of the Spe-loaded liposomes was approximately 150 nm and increased to about 170 nm after deposition of the PBAE layer. After deposition of the anionic NaAlg layer, the particle size increased to 198 nm . The sustained increase in particle size after each layer deposition suggested that the PBAE and NaAlg layers were successively coated onto the surface of the nanoparticles. The PDI values of the samples increased from 0.114 to 0.202 (still < 0.3) after deposition of the functional layers, showing that the drug-loaded samples had good uniformity both before and after the LbL process. To further confirm the successful deposition of each functional layer, the surface charge of the samples at each step was recorded and shown in . The zeta-potential of the uncoated Spe-loaded liposomes was about −50.5 mV (negative charge), while it increased markedly to +27.8 mV (positive charge) after coating with the polycationic PBAE layer.
After deposition of the polyanionic NaAlg layer, the surface charge of Spe@HNPs decreased again to about −55.0 mV (negative charge). Additionally, the characteristic peak at 1680 cm−1 arose from the stretching vibration of amides in PBAE, suggesting the successful deposition of the PBAE layer on the surface of the liposomes. The characteristic peaks from 3200 cm−1 to 1680 cm−1 were attributed to intermolecular hydrogen bonding and multi-molecule association among DSPG, Chol, PBAE, and NaAlg, suggesting the successful deposition of the NaAlg layer on the surface of the liposomes ( Figure S1 ). In summary, the complete charge reversal (negative-positive-negative) after each deposition of a functional layer, together with the FT-IR spectra of the samples, indicated that the PBAE and NaAlg layers were successfully coated onto the drug-loaded liposomes. Next, the morphology of Spe@HNPs was characterized using FE-SEM, as shown in . The Spe@HNPs were uniformly spherical in shape, with a particle size consistent with the result of the DLS measurement. The size was slightly smaller, owing to the lyophilization required for the FE-SEM test. Taken together, the bilayered multifunctional Spe@HNPs based on liposomes and PBAE and NaAlg layers were successfully prepared using the LbL process. The drug loading capacity of the hybrid nanoparticles was then evaluated. The drug loading contents (LC) and encapsulation efficiency (EE) of Spe@HNPs at different ratios of liposome to drug are listed in . At a mass ratio of 1:1 (Spe:liposome, m/m), the LC was 335.7 µg of drug per 1 mg of liposome, and the EE was about 33.0%. When the ratio in feed was increased to 2:1, the LC increased to 478.3 µg/mg, while the EE decreased to 24.4% because of the excess unloaded drug. With a further increase of drug in feed (3:1), the LC increased only slightly, while the EE fell below 20%.
Therefore, Spe@HNPs prepared at a mass ratio of Spe:liposome = 2:1 were used for the following studies in this work.
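As a rough consistency check, the relation between loading content and encapsulation efficiency can be sketched numerically. This assumes EE is simply the loaded drug divided by the drug in feed (a hypothetical reading of the definitions, not stated in the text):

```python
# Sketch of the LC/EE arithmetic (assumption: LC = µg drug loaded per mg
# liposome; EE = loaded drug / drug in feed). Reported values from the text.
def encapsulation_efficiency(lc_ug_per_mg, spe_to_liposome_ratio):
    # µg of drug fed per mg of liposome at the given Spe:liposome mass ratio
    fed_ug_per_mg = spe_to_liposome_ratio * 1000.0
    return 100.0 * lc_ug_per_mg / fed_ug_per_mg

for ratio, lc, ee_reported in [(1.0, 335.7, 33.0), (2.0, 478.3, 24.4)]:
    ee = encapsulation_efficiency(lc, ratio)
    print(f"Spe:liposome {ratio:.0f}:1 -> LC {lc} µg/mg, "
          f"EE ≈ {ee:.1f}% (reported {ee_reported}%)")
```

The computed values (≈33.6% and ≈23.9%) agree with the reported 33.0% and 24.4% to within about one percentage point, consistent with this reading of the definitions.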
pH-sensitivity and stability

To analyze the pH-responsive property of Spe@HNPs, the hydrodynamic diameter, PDI, and surface charge of Spe@HNPs at different pH conditions were measured, as shown in and Figure S2 . The particle size of Spe@HNPs increased dramatically from about 200 nm to 300 nm as the pH decreased from 8.0 to 4.0, especially under weakly acidic conditions . The PDI of Spe@HNPs displayed a similar trend, especially in the pH range of 7.0 to 6.0 ( Figure S2 ). The reason might be that protonation of the tertiary amine residues in the PBAE layer in an acidic environment switched PBAE from hydrophobic to hydrophilic, leading to swelling of the Spe@HNPs and the resulting increase in particle size. Furthermore, the surface charge of Spe@HNPs changed from negative to positive, owing to the ionization of the tertiary amine residues in the PBAE layer . Collectively, the pH-dependent changes in particle size, PDI, and surface charge proved that the bilayered Spe@HNPs system is pH-sensitive. High stability of a drug delivery system is a precondition for clinical use. Herein, the serum stability of Spe@HNPs was evaluated. Firstly, the particle size and PDI of Spe@HNPs after incubation in PBS (pH 7.4) with 20% FBS at 37 °C were recorded every day, as shown in . The particle size of Spe@HNPs increased only slightly, from 200 nm to about 220 nm, after 5 days of incubation, suggesting that the system had high serum stability. Additionally, the PDI values of Spe@HNPs were still less than 0.3 at 5 days; there was no significant increase in size or PDI. To further evaluate the stability of Spe@HNPs, the particle sizes and PDI values of Spe@HNPs in PBS or 5% glucose solution were measured, as shown in Figure S3 . No significant changes in particle size or PDI were observed after 1000-fold dilution, indicating that Spe@HNPs had high stability.
All these findings demonstrated that the prepared Spe@HNPs had high serum stability, suggesting that the system could have a prolonged circulation time in the body, which would facilitate the accumulation of Spe@HNPs at the infection site. In summary, the bilayered Spe@HNPs system exhibited reasonable pH-sensitivity and high stability and might therefore be used for drug delivery with a pH-triggered drug release profile.
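A minimal sketch of why the transition is sharpest between pH 7.0 and 6.0: the protonated fraction of a tertiary amine follows the Henderson-Hasselbalch relation. The pKa used here (~6.5) is an assumption typical of PBAE-type polymers, not a value from the text:

```python
# Henderson-Hasselbalch estimate of the protonated fraction of the PBAE
# tertiary amines at each pH. pKa ≈ 6.5 is an assumed, illustrative value.
def fraction_protonated(pH, pKa=6.5):
    # B + H+ <-> BH+ ; protonated fraction = 1 / (1 + 10^(pH - pKa))
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

for pH in (7.4, 7.0, 6.5, 6.0, 4.0):
    print(f"pH {pH}: ~{100 * fraction_protonated(pH):.0f}% of amines protonated")
```

Under this assumption, most amines are neutral at pH 7.4 (compact particle) and mostly protonated at pH 6.0 and below (swollen, positively charged particle), matching the observed size and charge trends.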
In vitro pH-triggered drug release performance

We next studied the drug release profiles of Spe@HNPs in PBS (pH 7.4, normal physiological conditions) and a weakly acidic buffer solution (pH 6.0, infectious microenvironment), as shown in . The drug release rate and cumulative release amount of Spe@HNPs were obviously different at the two pH conditions. At pH 7.4, the drug release rate was slow, and the cumulative drug release was less than 30% at 10 h and about 33% at 24 h. Under normal physiological conditions, the tertiary amine residues in PBAE were not ionized, the Spe@HNPs remained compact, and the drug molecules were well protected in the liposomes. In contrast, the drug release rate and cumulative release amount of Spe from Spe@HNPs were dramatically accelerated at pH 6.0: about 90% of the loaded drug was released from the HNPs at 10 h, and almost all of the drug was released at 24 h. The reason could be that the tertiary amine residues in PBAE were fully protonated at pH 6.0, which led to swelling of the HNPs, resulting in a rapid drug release rate and an enhanced cumulative release amount. In addition, the acidic external environment also facilitated drug release in comparison with normal physiological conditions. The drug release performance of Spe@HNPs further confirmed that the HNPs were pH-sensitive, consistent with the results in . In summary, the prepared Spe@HNPs showed pH-triggered drug release profiles, and acid significantly enhanced both the drug release rate and the cumulative release amount. This specific property could be used for on-demand controlled drug release.
Antimicrobial efficacy in vitro

Next, the antimicrobial efficacy of Spe@HNPs against S. aureus and the drug-resistant bacterium MRSA BAA40 was evaluated, as shown in . The MIC value of free Spe for S. aureus was approximately 2 µg/mL, while the MIC of Spe@HNPs was less than 1 µg/mL, indicating that both free Spe and Spe@HNPs had a strong antimicrobial effect against S. aureus . However, for the drug-resistant bacterium MRSA BAA40, the MIC of free Spe could not be determined (higher than 64 µg/mL), indicating low antimicrobial activity and a poor inhibition effect against MRSA BAA40. By contrast, Spe@HNPs still showed high antimicrobial activity, with a much lower MIC (4 µg/mL). Moreover, we performed a time-kill assay ( Figure S4 ) to further evaluate the antimicrobial efficacy of Spe@HNPs in vitro . The results showed that both free Spe and Spe@HNPs efficiently inhibited the growth of S. aureus compared with the control. However, against MRSA BAA40, free Spe exhibited a negligible inhibition effect, whereas Spe@HNPs obviously inhibited MRSA BAA40 compared with free Spe and the control, demonstrating the high antimicrobial efficacy of Spe@HNPs. The reason could be that the positively charged Spe@HNPs under weakly acidic conditions can disrupt the cytoplasmic membrane and cause leakage of the cytosol, thereby facilitating the pharmaceutical effect of the formulation and resulting in the death of the bacteria. This synergistic effect significantly improved the antimicrobial efficacy of Spe@HNPs. To satisfy the requirements of biomedical application, the system should have a negligible toxic effect. Therefore, the in vitro cytotoxicity of free Spe, HNPs, and Spe@HNPs against NIH 3T3 cells was measured, as shown in Figure S5 . The cytotoxicity of HNPs increased slightly with concentration; cell viability was still approximately 90% even at the highest concentration of 500 µg/mL, indicating a negligible toxic effect of blank HNPs.
For free Spe, the viability of NIH 3T3 cells obviously decreased as the concentration increased. Cell viability was about 80% when the concentration of free Spe was 100 µg/mL, and less than 50% of cells were alive when the concentration was higher than 300 µg/mL. In contrast, the cytotoxicity of Spe was obviously reduced after formulation in Spe@HNPs: at the highest concentration of 500 µg/mL, cell viability was still higher than 80%. In summary, the prepared Spe@HNPs could effectively kill the drug-resistant bacterium with negligible cytotoxicity.
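A small back-of-the-envelope note on the MIC data against MRSA BAA40: because the free-Spe MIC exceeded the highest tested concentration, the improvement from the formulation can only be bounded from below:

```python
# Fold-improvement in MIC against MRSA BAA40 implied by the text:
# free Spe MIC was ">64 µg/mL", Spe@HNPs MIC was 4 µg/mL, so the
# improvement is at least 64 / 4 = 16-fold (a lower bound, not an exact value).
free_spe_mic_lower_bound = 64.0  # µg/mL, highest tested concentration
spe_hnps_mic = 4.0               # µg/mL
fold_improvement = free_spe_mic_lower_bound / spe_hnps_mic
print(f"MIC improvement vs MRSA BAA40: at least {fold_improvement:.0f}-fold")
```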
Therapeutic efficacy of Spe@HNPs for ALI

Next, we investigated whether Spe@HNPs could eliminate the bacteria after MRSA BAA40 was directly administered to the mouse lung. At 4 h post-administration of bacteria into the lung, free Spe, HNPs, and Spe@HNPs were intravenously ( i.v. ) injected into the ALI-bearing mice. At 24 h, the BALF was collected and analyzed to evaluate the therapeutic efficacy, as shown in . The CFU count in the BALF of mice treated with Spe@HNPs was remarkably decreased compared with the free Spe treatment and controls , demonstrating that bacterial proliferation was effectively inhibited by systemic administration of Spe@HNPs. The number of infiltrated leukocytes in BALF was also recorded after the different treatments . The free Spe treatment only slightly reduced leukocyte infiltration, while the Spe@HNPs treatment sharply reduced the number of infiltrated leukocytes. Moreover, histological studies of the lungs after Spe@HNPs treatment for 20 h ( Figure S6 ) showed that leukocyte infiltration was obviously decreased compared with the other controls, indicating a reduced level of inflammation. Furthermore, the inflammatory factors (TNF- α , IL-1 β , and IL-6, ) were obviously decreased after Spe@HNPs treatment in comparison with the others. These results proved that the inflammation in mice treated with Spe@HNPs was well mitigated, reflecting the reduced bacterial burden in the lungs. As reported, protein permeability in the lung is associated with vasculature integrity, and low vasculature integrity suggests severe inflammation (Mehta and Malik, ; Molinaro et al., ; Zhang et al., ). As shown in , the protein content in the BALF of mice treated with Spe@HNPs was much lower than with free Spe and the other treatments, indicating that the lung vasculature was repaired after the invasive bacteria were removed.
In addition, the blank carrier HNPs also exhibited a slight therapeutic effect compared with the control, possibly because the positive charge on the surface of the NPs broke the bacterial membrane and induced the death of bacteria. Furthermore, the plasma Spe concentration as a function of time was examined after intravenous injection ( i.v. ) of the various formulations into healthy mice. Figure S7 shows the pharmacokinetics (PK) of free Spe and Spe@HNPs in vivo . The blood half-life ( t 1/2 ) of free Spe was less than 0.5 h, showing that free Spe molecules were rapidly cleared from the blood, which might lead to poor therapeutic efficacy. In contrast, Spe@HNPs had a prolonged blood circulation time ( t 1/2 = 5 h) owing to the protection of the HNPs, indicating enhanced accumulation of the system and a high concentration of Spe in the lung, which would lead to higher therapeutic efficacy. The biosafety of Spe@HNPs was preliminarily evaluated here to support its potential use in the clinic, as shown in Figures S8 and S9 . The results of the blood biochemistry analysis ( Figure S8 ) indicated that the heart function marker (CK), hepatic function markers (ALT, AST), and renal function markers (CREA, BUN) in the Spe@HNPs group exhibited negligible differences compared with the normal group. The weights of the major organs (especially the lung) of mice treated with Spe@HNPs showed no difference compared with the normal group. However, the lung weights of mice treated with free Spe were significantly decreased in comparison with those of the normal and Spe@HNPs groups. These results suggested the high biosafety of Spe@HNPs together with high therapeutic efficacy. Taken together, the prepared Spe@HNPs could remarkably improve the therapeutic efficacy against drug-resistant bacterium-induced acute lung infection and mitigate the inflammatory response with reduced side effects.
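Assuming simple first-order elimination (a common simplification; the text reports only the half-lives), the impact of the longer circulation time can be illustrated as follows:

```python
# First-order elimination sketch: C(t) = C0 * 2^(-t / t_half).
# Half-lives taken from the text (free Spe < 0.5 h; Spe@HNPs ≈ 5 h);
# a single-compartment model is assumed here for illustration only.
def fraction_remaining(t_h, t_half_h):
    return 2.0 ** (-t_h / t_half_h)

for t in (1.0, 5.0, 10.0):
    free = fraction_remaining(t, 0.5)
    hnps = fraction_remaining(t, 5.0)
    print(f"t = {t:>4} h: free Spe ~{100 * free:.2f}% vs "
          f"Spe@HNPs ~{100 * hnps:.1f}% of initial concentration")
```

Even under this crude model, by 5 h essentially none of the free drug remains in circulation while about half of the nanoparticle-carried dose does, consistent with the claimed accumulation advantage.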
Conclusion

In summary, we have successfully developed a pH-responsive drug delivery system (Spe@HNPs) composed of antibiotic-loaded liposomes coated with PBAE/NaAlg layers using extrusion and layer-by-layer processes. The liposomes were prepared by the lipid film hydration method, and the drug was then loaded through the pH-gradient method. The positively charged PBAE and negatively charged NaAlg layers were then coated onto the surface via a layer-by-layer process. After intravenous administration, these Spe@HNPs can passively deposit at the infection site and release the drug in response to the acid of the infectious microenvironment, thereby eliminating the bacteria and treating the mouse lung infection (ALI). Spe@HNPs treated the drug-resistant bacterium-induced ALI more effectively than the free drug. The reason might be that Spe@HNPs, which acquire a positive surface charge through protonation of the tertiary amine residues in the PBAE layer under an acidic environment, could break the bacterial cell wall, induce the death of bacteria, and exhibit a synergistic effect with the drug. This work not only reports a promising nanomedicine for ALI treatment but also provides an effective approach to fabricating a multifunctional, multilayer nanosystem for drug delivery and controlled release. The mechanism of the synergistic effect between the polycationic polymer-based carrier and the antibiotic is very important for the development of therapeutics against antimicrobial resistance, and we will focus on this in future work.
Postmortem Biochemistry and Immunohistochemistry in Anaphylactic Death Due to Hymenoptera Sting: A Forensic Case Report

Anaphylaxis is commonly known as a "severe, life-threatening generalized or systemic hypersensitivity reaction" and can occur through both immunological and non-immunological mechanisms . An anaphylactic reaction is an acute IgE-mediated hypersensitivity response mediated by inflammatory mediators released into the systemic circulation from mast cells and basophils; an anaphylactoid reaction has non-IgE-mediated mechanisms and is usually clinically indistinguishable from an IgE-mediated reaction . The most common triggers of anaphylaxis are drugs, food, and insect venom, and among these, Hymenoptera stings are well represented . The Hymenoptera order is classified into three families: bees (Apidae), wasps (Vespidae), and ants (Formicidae). These arthropods can sting humans and have the potential to cause anaphylactic and non-anaphylactic reactions. Honeybees and bumblebees have barbed stingers and generally sting only if provoked; they characteristically die after a single sting. Wasps, hornets, and most yellow jackets have no barbed stingers and can sting many times. They are usually more aggressive than bees and may also sting without any provocation . Hymenoptera toxins contain various complexes of peptides, enzymes, proteins, and chemicals, and they cause cellular injury via several mechanisms . Studies on the effects of these molecules have demonstrated actions similar to those of toxins, hormones, antibiotics, and defensins, which are able to interact with different pharmacological targets, causing inflammation, pain, changes in blood pressure and heart rhythm up to cardiac arrhythmia, and neurotoxicity, and can even lead to death .
It is important to underline that, in the evaluation of deaths probably related to a bee sting, it is not always possible to identify macroscopic signs of the sting . In fact, in some cases the puncture mark can be difficult to locate, or it is absent. Moreover, if death occurs within a very short time, no local reaction may be found . Therefore, in such cases, the postmortem assessment of the cause of death requires biochemical and immunohistochemical investigations that, together with circumstantial data, clinical data, autopsy, and routine histological findings, can provide useful evidence . Here, the authors report a case of anaphylactic death due to Hymenoptera stings to highlight the contribution of several forensic investigations in assessing the cause of death.
The case concerns a 59-year-old Caucasian man with a history of previous sensitization to Hymenoptera stings: a few years earlier, he had developed facial edema after both bee and wasp stings, subsequently confirmed by skin tests. Anamnestic data were negative for cardiovascular and respiratory diseases. On the day of his death, the man contacted an employee by phone, asking for help and reporting that he had probably been stung by bees. On arrival, the employee found the man unconscious, lying on the ground, with a transparent liquid coming out of his mouth, and called an ambulance. The medical staff found the man in cardiorespiratory arrest and began resuscitation, but the man died. The autopsy was performed 24 h after death. Body inspection showed no signs of bee puncture. The gross examination revealed mild edema of the larynx, whitish foamy liquid in the bronchial tree, and red-brownish foamy dense liquid in the lungs. Routine histology was also performed, showing subacute pulmonary emphysema, endo-alveolar edema and hemorrhage, marked congestion of the interalveolar septa, bronchospasm, and scattered bronchial obstruction due to mucus hyperproduction ( A–C). Myocardial tissue showed hypertrophic myocytes, myofiber break-up, and foci of wavy fibers ( D–E); atherosclerotic plaques were observed in the coronary arteries. Toxicological investigations were negative for alcohol, drugs of abuse, and psychotropic drugs. Biochemical investigations performed on peripheral blood (femoral vein) showed an increased tryptase level of 189 µg/L, troponin I of 100,000 pg/mL, and proBNP of 579 pg/mL. In addition, the ImmunoCAP method was applied for the determination of total IgE antibodies, which were found to be equal to 200 kU/L.
ImmunoCAP (Thermo Fisher Scientific/Phadia, Uppsala, Sweden) was carried out for the measurement of specific IgE against honey bee (i1), white-faced hornet (i2), common wasp (yellow jacket, i3), paper wasp (i4), and yellow hornet (i5); the analysis identified honey bee IgE of 5.30 kUA/L and yellow jacket IgE of 3.00 kUA/L. For the immunohistochemical procedures, 4-micron-thick sections obtained from larynx, lung, heart, and spleen tissue blocks were deparaffinized, washed in a descending alcohol scale, treated with 3% hydrogen peroxide for 10 min, washed again in deionized water three times, and incubated with normal sheep serum for 30 min at room temperature to prevent unspecific adherence of serum proteins. The slides were then washed with deionized water and incubated for 30 min at 37 °C with the primary antibody, a monoclonal mouse anti-human tryptase antibody (Roche Diagnostics, code 760-4276). Next, the sections were washed three times with PBS, incubated with a biotinylated goat anti-mouse IgG secondary antibody (1:300; Abcam, code ab7064) for 20 min at room temperature, subsequently incubated with horseradish peroxidase-labeled secondary antibody for 30 min, developed with diaminobenzidine tetrahydrochloride, and counterstained with hematoxylin using the ULTRA Staining system (Ventana Medical Systems). Negative controls were obtained by omitting the specific antisera and substituting PBS for the primary antibody. The immunohistochemical reaction revealed intense expression in the larynx and lungs, with several immunopositive mast cells and diffuse immunopositivity for degranulated tryptase ( A–D); mild expression in both the coronary artery walls and the myocardial tissue, characterized by scattered positive mast cells and foci of tryptase degranulation ( E–G); and mild positivity in the splenic tissue, with mast cells and diffuse degranulated tryptase expression ( H).
The global incidence of anaphylaxis is reported as between 50 and 112 episodes per 100,000 person-years, with a low mortality rate, estimated at 0.05–0.51 per million people per year for drugs, 0.03–0.32 for food, and 0.09–0.13 for venom . In Italy, Bilò et al. reported 392 cases of death from anaphylaxis, with a mortality rate of 0.51 per million people per year. Hymenoptera stings were responsible for 5.6% of these deaths, with an overall mortality rate of 0.17 per million people per year. Even though Hymenoptera stings are a frequent cause of anaphylactic reactions, a considerable number of related deaths cannot be correctly identified because of the difficulty of making a postmortem diagnosis . To establish the diagnosis of death due to anaphylactic shock, it is necessary to integrate circumstantial and anamnestic data, autopsy, and histological findings . However, the postmortem assessment of anaphylaxis as the cause of death is considered a challenge for forensic pathologists, because the evidence emerging from autopsy and histology is often unspecific. In this context, other postmortem analyses, such as biochemistry and immunohistochemistry, can provide a useful contribution. The subject's clinical history serves to collect information on both previous allergic reactions and sensitization to specific allergens; likewise, the circumstances of death play an important role in the forensic analysis of the case . Relevant findings can be provided by autopsy and routine histology. Sting marks and the evidence emerging from gross and microscopic analysis of the respiratory system (such as laryngeal edema, tracheo-bronchial hypersecretion, bronchoconstriction, emphysema and acute pulmonary edema, congestion, and intra-alveolar hemorrhage) support the occurrence of anaphylaxis . However, some of these findings may not be present and, even if identified, cannot be considered pathognomonic or specific.
In fact, such respiratory system involvement has also been described in asthma . Many researchers have suggested the use of biochemistry and immunohistochemistry to fill the gaps left by poor or absent autopsy and histological data when making the postmortem diagnosis of anaphylaxis . In particular, blood biochemical investigations evaluating tryptase and IgE are described as useful tests to confirm deaths related to anaphylactic reactions due to bee venom, especially when there are no evident signs of stings . Serum tryptase is a neutral protease of human mast cells, mostly used as a biomarker to better define the postmortem diagnosis of anaphylaxis . Tryptase is a very stable enzyme and can be detected up to 6 days after death . Nevertheless, it must be emphasized that postmortem degradation processes can reduce the real concentration of tryptase in proportion to the increase in the postmortem interval (PMI). Therefore, if there is a suspicion of death related to anaphylaxis, it is suggested to collect a blood sample as soon as possible . The forensic literature reports variable cut-offs for serum tryptase from peripheral blood. Meyer et al. demonstrated that a tryptase level of 10 μg/L or greater has a sensitivity of 86% and a specificity of 88% for the diagnosis of postmortem anaphylaxis. Tse et al. reported a cut-off value of tryptase ≥53.8 μg/L in peripheral blood taken from the femoral vessels to make a postmortem diagnosis of anaphylaxis-related death. Edston et al. proposed a value of 45 μg/L as a new cut-off point, especially if death is due to insect stings. The literature also offers evidence on serum tryptase measurement in blood taken from central vessels, such as the aorta, for which the suggested cut-off value is 110 µg/L .
However, it has been highlighted that prolonged cardiac massage or defibrillation can increase mast cell degranulation and tryptase levels, owing to visceral trauma from chest compressions . In general, several studies suggest preferring peripheral blood sampling for the postmortem tryptase assay . Nevertheless, factors affecting the tryptase concentration (i.e., hemolysis, the length of the agonal period, the specific type of trauma, and the cause of death) should always be considered in forensic practice . In fact, increased tryptase levels have also been described in non-anaphylactic deaths, such as sudden infant death syndrome, acute deaths after heroin injection, traumatic deaths, and asphyxia . The serum concentration of tryptase found in the femoral blood of the case presented here was 189 µg/L and supported the occurrence of anaphylaxis. In the presented case, other useful data were obtained from the analysis of total and specific IgE. In particular, the analysis revealed a total IgE value of 200 kU/L and the presence of specific IgE for honey bee (5.30 kUA/L) and yellow jacket (3.00 kUA/L), demonstrating high (Radio-Allergo-Sorbent-Test class 4) and moderate (Radio-Allergo-Sorbent-Test class 3) levels of sensitization, respectively. Evidence in the literature suggests combining the results of both mast cell tryptase and allergen-specific IgE and/or total IgE assays in postmortem serum to support the assessment of IgE-mediated fatal anaphylaxis . Even if few forensic studies have investigated the behavior of postmortem serum total and specific IgE, some evidence showed relative stability of the antibodies in peripheral blood, although some authors reported an increase in total IgE level proportional to the postmortem interval .
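For illustration only, the femoral tryptase value of this case can be compared against the peripheral-blood cut-offs cited above; the labels name the cited studies, and this sketch is not a validated decision rule:

```python
# Comparison of the case's femoral tryptase (189 µg/L) with the
# peripheral-blood cut-offs reported in the cited literature.
FEMORAL_CUTOFFS_UG_L = {
    "Meyer et al.": 10.0,
    "Edston et al. (insect stings)": 45.0,
    "Tse et al.": 53.8,
}

def exceeds_all_cutoffs(tryptase_ug_l, cutoffs=FEMORAL_CUTOFFS_UG_L):
    return all(tryptase_ug_l >= c for c in cutoffs.values())

measured = 189.0
for source, cutoff in FEMORAL_CUTOFFS_UG_L.items():
    status = "exceeded" if measured >= cutoff else "not exceeded"
    print(f"{source}: cut-off {cutoff} µg/L -> {status}")
print("Exceeds every cited peripheral cut-off:", exceeds_all_cutoffs(measured))
```

The measured value exceeds even the most conservative of the cited thresholds, which is why, in combination with the other findings, it supports the anaphylaxis diagnosis rather than establishing it alone.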
It is also important to observe that the measurement of serum IgE provides information about the atopic disposition and the degree of sensitization to a particular allergen; thus, it cannot be considered a confirmation of the causal link between IgE-mediated anaphylaxis and death . Immunohistochemistry is another investigation useful for the postmortem diagnosis of anaphylaxis. Many studies have focused on the role of mast cell and tryptase detection in tissues, among which are the bronchial, respiratory, and intestinal mucosa, the red pulp of the spleen, and connective tissue (i.e., cutaneous and perivascular) . Nevertheless, it is important to underline that the identification of mast cells in the tissues cannot be considered sufficient to make a diagnosis of certainty. These limits are related to (i) the involvement of mast cells in various biological processes (i.e., tissue remodeling, angiogenesis, fibrosis, and asphyxia), (ii) the physiological interindividual variability in the number of mast cells, and (iii) the increased detection also observed in non-anaphylactic deaths . In particular, Edston et al. reported a similar number of pulmonary mast cells in anaphylactic deaths and control cases (cardiovascular deaths), whereas a higher expression of spleen mast cells was observed in anaphylactic deaths than in controls. The immunohistochemical analysis performed in the presented case revealed intense positivity of mast cells and degranulated tryptase in the larynx and lungs, together with mild marker expression in the spleen. These expression patterns are in accordance with the evidence in the literature and support an anaphylactic death. Moreover, the immunohistochemical findings observed in the heart, together with the increased levels of serum troponin and pro-BNP, could suggest a coronary hypersensitivity similar to that described in Kounis syndrome.
This morbidity is associated with allergic, hypersensitivity, anaphylactic, and anaphylactoid reactions, and it is classified into three types . The type I variant, known as vasospastic allergic angina, is characterized by endothelial dysfunction or microvascular angina and occurs in subjects with normal or nearly normal coronary arteries and without predisposing factors for coronary artery disease; the release of inflammatory mediators due to anaphylaxis can cause coronary artery spasm and even myocardial injury with impaired cardiac enzymes and troponins. The type II variant, also known as allergic myocardial infarction, has been described in subjects with quiescent pre-existing atheromatous disease. In this case, the acute release of inflammatory mediators can provoke either coronary artery spasm with normal cardiac enzymes and troponins or coronary artery spasm associated with plaque erosion or rupture. The type III variant occurs in patients with a coronary artery stent, in whom the inflammatory reaction causes a prothrombotic response and stent thrombosis; eosinophils and mast cells are generally detected in the thrombus and coronary wall at histological examination . Therefore, in Kounis syndrome, the myocardial damage seems related to the effect of both mast cell degranulation and the release of inflammatory mediators that affect the cardiovascular system (i.e., histamine-induced coronary vasoconstriction) . Few cases of Kounis syndrome due to bee and wasp stings have been described in the literature . This case highlights that the postmortem diagnosis of anaphylactic death is based on a combination of circumstantial data, medical history, gross and microscopic examination, and blood serum analyses. 
In particular, even if tryptase analysis by biochemistry and immunohistochemistry and IgE dosage have limited specificity and sensitivity, their integration with the other information is fundamental to performing a differential diagnosis and, thus, assessing anaphylaxis . Moreover, prompt sampling, performed as soon as possible, is crucial to prevent the effect of postmortem phenomena (i.e., cell lysis) on tryptase .
In conclusion, the data emerging from the forensic investigations led to assessing the cause of death as anaphylactic shock due to Hymenoptera stings affecting the respiratory and cardio-circulatory systems, with possible vasospastic involvement of the coronaries. The case described here supports the importance of circumstantial data in guiding postmortem investigations, especially if no external signs attributable to the insect bite and/or only unspecific autopsy and histological evidence are found. Moreover, the important role of biochemistry and immunohistochemistry in demonstrating the anaphylactic reaction has been described, suggesting that these investigations should be routinely implemented in forensic practice when anaphylaxis is suspected.
|
An overview of neuro-ophthalmic disorders at Jenna Ophthalmic Center, Baghdad, Iraq (2021-2022) | 8231bfae-7386-4377-8bfb-37ae1ce368a7 | 11080512 | Ophthalmology[mh] | The field of neuro-ophthalmology emerged as a recognized medical specialization in the 1960s and has shown significant growth in subsequent years . Neuro-ophthalmology combines the disciplines of neuroscience and ophthalmology, focusing on studying disorders of the neurological system manifesting as visual dysfunction . The visual pathways, which connect the retina to the visual cortex, and the oculomotor system, which links the eye muscles to the cortical centers, establish direct connections with a significant portion of the central nervous system. By evaluating these connections, neuro-ophthalmologists can make assumptions about the severity and specific location of impairments . Patients may exhibit various ocular manifestations, including diminished visual acuity, temporary visual impairment, double vision, atypical eye movements, abnormalities in eyelid function, irregularities in pupil size, and sometimes, perceptual distortions . Neuro-ophthalmic diseases are not very common, but they can have serious consequences and even be life-threatening . These diseases have a significant role in the development of ocular morbidity . Disorders affecting the optic nerve are often seen as contributing factors to the onset of blindness . These disorders can include a range of conditions, such as optic neuritis and atrophy resulting from diverse causes, papilledema, optic nerve malignancies, and other heterogeneous neuropathies . Proptosis may also be caused by malignancies affecting the optic nerve and meninges . Ocular motor nerve palsies are significant etiological factors contributing to the development of strabismus (commonly known as squint) and diplopia (double vision) . 
Neuro-ophthalmic diseases primarily affect two aspects of vision: (1) the afferent visual system, resulting in different types of visual dysfunction, and (2) the efferent pathway, resulting in central ocular motor disorders, ocular motor cranial neuropathies, gaze instabilities, and pupillary disorders . These conditions can also impact systemic functions related to the neuromuscular junction or the extraocular muscles . Changes in the sensory and motor pathways may arise from many circumstances, such as autoimmune, infectious, inflammatory, ischemic, traumatic, compressive, inherited, or degenerative diseases . It is not uncommon for a neuro-ophthalmic dysfunction, such as inflammatory optic neuropathy, to serve as an early indication of an underlying neurological disorder, such as multiple sclerosis . Similarly, optic nerve head (ONH) swelling may serve as the only indication of heightened intracranial pressure resulting from critical brain diseases that need immediate medical attention . Studies have been conducted in different countries to determine the incidence of specific neuro-ophthalmic diseases. For example, a study conducted in the United States found that the incidence rate of non-arteritic anterior ischemic optic neuropathy (NAION) among individuals aged 50 years and older was 10 per 100,000 in Olmsted County, Minnesota . Another study documented the yearly incidence rates of arteritic and nonarteritic anterior ischemic optic neuropathy (AION) as 0.36 and 2.30 per 100,000 individuals, respectively. These findings were specifically seen among patients 50 years of age or older . Optic neuropathies were the most frequent cause of neuro-ophthalmic disorders in a study conducted in France . The incidence of optic neuritis varies across countries, with reported rates of 1.03 per 100,000 in Japan, 1.46 per 100,000 in Sweden, and 1.60 per 100,000 in Croatia . 
However, there is limited research on the epidemiology of less prevalent disorders, such as Leber's optic neuropathy . Additionally, there is a lack of available data on the incidence of neuro-ophthalmic diseases in Middle Eastern and other Asian populations. Neuro-ophthalmic disorders are often documented individually for each illness, with little comprehensive data available on their overall incidence and pattern. The overall incidence of neuro-ophthalmic illnesses in Iraq is still not recorded. This research aimed to assess the clinical, demographic, and etiological characteristics of patients seeking consultation at a neuro-ophthalmology clinic in Iraq over one year. Study design and setting This study used a prospective cross-sectional observational methodology to examine the incidence of neuro-ophthalmic disorders in Iraq. The present investigation adhered to the STROBE guidelines for reporting cross-sectional observational research and the principles outlined in the Declaration of Helsinki for biomedical research . The study was conducted at a single center, Janna Ophthalmic Center, based in Baghdad, Iraq. The facility serves a diverse patient population from multiple governorates. The patient recruitment method included people who were attending the facility for routine follow-up appointments as well as those seeking medical counseling for ocular conditions. The selection of this facility was based on its attributes as a hospital with a wide range of subspecialties, staffed by qualified medical professionals, equipped with advanced diagnostic tools, and with an abundant patient load that closely reflects the general community. The study was conducted between March 2021 and November 2022. Inclusion criteria All newly diagnosed patients with neuro-ophthalmic illnesses, regardless of gender or age group, who attended the neuro-ophthalmology clinic were included. 
Exclusion criteria Patients who missed scheduled appointments were excluded from the study as their absence could have impacted the accuracy and completeness of the data collected. Patients with psychological illnesses that may have affected their ability to provide reliable data were also excluded to ensure the validity of the study. Similarly, patients who were illiterate or had significant difficulty reading were excluded, as reading ability was crucial for understanding and completing data collection materials. Additionally, patients who were unable or refused to provide informed consent were excluded to uphold ethical considerations. Lastly, patients who refused to participate in the data collection process were also excluded, as their unwillingness to participate could have hindered the collection of necessary data. Patient assessment and data collection The initial manifestation of symptoms was determined based on self-reporting from patients. To establish a definitive diagnosis, the neuro-ophthalmologist performed a comprehensive evaluation, including a detailed medical history, physical examination, specific tests, and sometimes neuroimaging. The demographic characteristics, including age and gender, and the primary symptoms and duration, were documented. The primary clinical manifestations were documented in each instance. The measurement of distant visual acuity was conducted using Snellen's chart. In cases where the visual acuity was below 6/60, the individual's capacity to recognize finger counting, detect hand movement, or detect light was assessed. The external ocular examination was conducted with the pen-torch and the slit-lamp biomicroscope. A summary of devices and equipment used in the study is listed in . The evaluation of optic nerve lesions included color desaturation tests and visual field assessment. 
The spectrum of neuro-ophthalmic illnesses examined in this research included (1) central nystagmus, (2) congenital optic anomalies (myelinated nerve fiber layer [NFL], disc coloboma, disc hypoplasia), (3) cortical pathology (cerebrovascular accident [CVA], tumors, infection), (4) fourth nerve palsy, (5) functional visual loss, (6) headache syndromes (migraine, cluster, trigeminal neuralgia), (7) Leber hereditary optic neuropathy (Kjer Behr LHON), (8) ischemic optic neuropathy (NAION, AION, posterior ischemic optic neuropathy [PION] ), (9) miscellaneous, (10) multiple sclerosis (optic neuritis, internuclear ophthalmoplegia [INO], wall-eyed bilateral internuclear ophthalmoplegia [WEBINO]), (11) multiple nerve palsies (ophthalmoplegia), (12) myopathies (myasthenia, chronic progressive external ophthalmoplegia [CPEO], Botox), (13) Optic neuritis (not associated with multiple sclerosis), (14) papilledema, (15) pupil anomalies (Adies, traumatic, Horner), (16) seventh nerve palsy (acute only), (17) sixth nerve palsy, (18) third nerve palsy, (19) traumatic/compressive optic neuropathy. Bias The study participants frequently underwent an initial evaluation with a general ophthalmologist before being referred to the neuro-ophthalmology clinics. This approach was implemented to ensure that the study sample primarily consisted of individuals likely to receive an accurate diagnosis of a neuro-ophthalmic disorder. Additionally, this step aimed to minimize the potential bias arising from misclassification. Statistical analysis The statistical analyses were conducted using IBM SPSS Statistics for Windows (RRID: SCR_002865), version 23. The graphical illustrations were generated using GraphPad Prism 8 for Windows. The continuous and categorical data distributions were reported using the mean and standard deviation for continuous variables, and frequency and percentages for categorical variables. 
A Chi-square test was conducted to examine the relationship between demographic data and neuro-ophthalmic conditions. A P value less than 0.05 was considered statistically significant. 
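The chi-square test of association described in the statistical analysis can be sketched in pure Python for a 2×2 contingency table (with one degree of freedom, the chi-square survival function reduces to erfc(sqrt(x/2))). The counts below are hypothetical, not the study's data, and the sketch omits Yates' continuity correction:

```python
import math

def chi2_2x2(table):
    """Pearson chi-square test of association for a 2x2 contingency
    table (e.g. gender x presence of a given condition).
    Returns (statistic, p_value); with 1 degree of freedom the
    chi-square survival function reduces to erfc(sqrt(x / 2))."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows = [a + b, c + d]
    cols = [a + c, b + d]
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = rows[i] * cols[j] / n   # E_ij = row_i * col_j / N
            stat += (obs - expected) ** 2 / expected
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, p_value

# Hypothetical counts: condition present/absent for 50 women and 50 men.
stat, p = chi2_2x2([[30, 20], [10, 40]])
```

In practice a statistics package (e.g. scipy.stats.chi2_contingency, which applies Yates' correction to 2×2 tables by default) would be used; the manual version only makes the expected-count arithmetic explicit.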
Over the 1.5-year study period, 6440 patients were referred to various specialist clinics at our institution. Of these, 613 cases were confirmed through consultation with the neuro-ophthalmology clinic, resulting in an incidence rate of 9.51%. The average age of participants was 38.52 ± 21.64 years, ranging from 1 to 88 years. Gender was evenly distributed, with 50.9% female and 49.1% male participants. Most of the patients (64.3%) were over the age of 30. The group also included pediatric patients, comprising 22.8% of the total population, as presented in and . Ischemic optic neuropathy (NAION, AION, PION) emerged as the prevailing diagnosis, accounting for 17.61% of newly reported cases in the field of neuro-ophthalmology. Following this, the prevalence rates of sixth nerve palsy, traumatic/compressive optic neuropathy, and papilledema were 10.92%, 8.8%, and 8.64%, respectively. This study revealed that traumatic/compressive optic neuropathy was the prevailing neuro-ophthalmic illness in individuals below the age of 50, whereas ischemic optic neuropathy (specifically NAION, AION, and PION) was the predominant condition among patients aged 50 and older, as seen in . A significant proportion of participants, including 48.6% in the left eye (OS) and 47% in the right eye (OD), had a visual acuity of 6/6, which indicates their ability to see details from a distance of 6 meters (20 feet) in a manner equivalent to those with normal vision. 12.1% of the patients had a visual acuity of 6/9 in the OS and 12.2% in the OD. The rest of the findings are presented in and . A total of 11.6% of the OS evaluated showed an ability to count fingers at a given distance. Similarly, 12.7% of the OD had the ability to count fingers at the same distance. 
This testing procedure is only used after establishing the patient's inability to recognize letters on the visual acuity chart. A total of 0.7% of the OS and 0.5% of the OD could perceive light. This particular testing approach is limited to cases in which patients show little or no response on the hand motion test. During this examination, the examiner uses a penlight to illuminate the patient's pupil. Subsequently, the patient is instructed to either indicate the location of the light source or articulate the specific direction from which the light originates. When the patient exhibits a complete absence of light perception, this condition is documented using the acronym NLP, which stands for no light perception. An individual who cannot see the light in one eye is classified as blind in that particular eye. When NLP is seen in both eyes, the patient is clinically diagnosed with complete blindness. The reported percentages of NLP in the left eye (OS) and right eye (OD) were 2.8% and 2.9%, respectively ( and ). There was no statistically significant difference in the measurements of the spherical equivalent between the left (OS) and right eyes (OD) of the patients ( P = 0.602). The mean value of the spherical equivalent was -0.1403 in the left eyes and -0.1208 in the right eyes, as shown in and . The patterns of eye involvement in patients significantly differed across diagnoses. For conditions like cortical pathology and papilledema, there was a statistically significant tendency for both eyes to be affected rather than just one ( P < 0.05). On the other hand, multiple sclerosis (optic neuritis, INO, WEBINO) was predominantly detected in the left eye ( P < 0.05), while fourth nerve palsy was reported significantly more often on the right side, as presented in . The development of neuro-ophthalmic disorders was significantly influenced by medical background and patient-related factors. 
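The Snellen acuity values reported in these results (6/6, 6/9, 6/60) are commonly converted to the logMAR scale for statistical analysis. The conversion formula below is standard in ophthalmology, but its use here is purely illustrative, not part of the study's methods:

```python
import math

def snellen_to_logmar(numerator, denominator):
    """Convert a Snellen fraction (e.g. 6/60) to logMAR:
    logMAR = log10(denominator / numerator).
    6/6 -> 0.0 (normal vision); larger values mean worse acuity."""
    return math.log10(denominator / numerator)

# Acuity levels mentioned in the results:
normal = snellen_to_logmar(6, 6)                # 0.0
mild = snellen_to_logmar(6, 9)                  # ~0.18
blindness_threshold = snellen_to_logmar(6, 60)  # 1.0
```

Counting fingers, hand motion, and light perception fall below the range of the chart and are usually assigned conventional logMAR values rather than computed ones.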
The findings of this study indicate that diabetes mellitus had a notable impact on 42.7% of the cases, followed by hypertension, which affected 39.3% of the individuals who had a medical history of these chronic diseases. The data reported in indicate that smoking had a detrimental effect on the development of neuro-ophthalmic diseases in 9% of the patients. The current research was conducted at the neuro-ophthalmology clinic, which is a division of the Janna Ophthalmology Center. Consequently, all participating patients had eye symptoms or impaired visual functions. The overall incidence of neuro-ophthalmic cases among new patients attending the ophthalmology clinic was 9.51%. This incidence rate was higher than that reported in a Nigerian study by Omoti et al., who reported an incidence rate of 4.47% . A possible explanation for this variability is the prevalence of certain neuro-ophthalmic illnesses, which are more often seen in Asian populations compared to other ethnic groups . Environmental, sociocultural, and psychological variables may also have an influence . Owing to the expansive definition of neuro-ophthalmic illnesses and variations in the criteria used for evaluation, the frequency of certain disorders, as well as the total occurrence, may differ across studies. There seems to be a positive correlation between advancing age and a higher incidence of neuro-ophthalmic illnesses. Research conducted in Singapore revealed a strong correlation between the occurrence of these disorders and age over 40, whereas no association was seen with gender . These results are consistent with what was determined in the present research. Although this research did not find any statistically significant differences in relation to gender, it is worth noting that there was a slightly higher frequency identified among women. This disparity in prevalence rates between genders may be attributed to biological factors. 
For instance, female sex steroid hormones have significant effects on endothelial cells, leading to vasodilation and increased blood circulation. These actions play a crucial role in preventing ischemic diseases . However, the observed protective effect diminishes significantly in postmenopausal women, who have inferior cerebrovascular responses compared to males of the same age . Consequently, the biological alteration seen in women aged 50 to 60 may potentially elevate the likelihood of non-arteritic anterior ischemic optic neuropathy and result in similar or slightly higher prevalence rates compared to men. Furthermore, sex-specific risk factors probably contribute to the observed disparity across genders. The male population, for example, has a greater propensity for smoking and alcohol use, which correlates with an increased susceptibility to peripheral vascular disorders . On the other hand, women tend to exhibit elevated levels of plasma total cholesterol and low-density lipoprotein cholesterol, which leads to an increased susceptibility to atherosclerosis and thromboembolism compared to males . Research conducted over two years at a tertiary eye clinic in Nigeria documented 76 newly diagnosed cases of neuro-ophthalmic conditions. This accounts for about 4.47% of all new patients seen throughout the study. The study identified ocular motor palsies as the most prevalent neuro-ophthalmic disorder, accounting for 27.6% of cases. Optic neuropathies were the second most common condition, comprising 22.4% of cases, followed by migraine at 14.5%. The most frequently reported symptoms at presentation were impaired vision, reported by 39.5% of patients, followed by double vision (18.4%) and headache (17.1%) . Lim et al. conducted a study in Singapore to determine the yearly incidence of neuro-ophthalmic disorders. The study found an incidence rate of 9.81 per 100,000 individuals. 
The three most prevalent neuro-ophthalmic diseases identified were abducens nerve palsy, anterior ischemic optic neuropathy, and oculomotor nerve palsy, with incidence rates of 1.27, 1.08, and 0.91 per 100,000 individuals, respectively . The present study identified five leading neuro-ophthalmic illnesses: ischemic optic neuropathy (including NAION, AION, and PION), sixth nerve palsy, cortical pathology (including CVA, tumors, and infection), traumatic/compressive optic neuropathy, and papilledema. The study by Lim et al. in Singapore revealed that abducens nerve palsy had the greatest incidence rate of 1.27 per 100,000 per year. This was followed by NAION with an incidence rate of 1.08 per 100,000, oculomotor nerve palsy with an incidence rate of 0.91 per 100,000, and optic neuritis with an incidence rate of 0.83 per 100,000 . The findings of this research indicate that traumatic/compressive optic neuropathy was the most common neuro-ophthalmic disease seen in individuals under the age of 50. Conversely, ischemic optic neuropathy, namely NAION, AION, and PION, was the most prevalent condition among patients aged 50 and older. This finding is consistent with a study conducted by Nattapong et al. in a tertiary hospital in Thailand, which identified the same prevalent disease among individuals aged 50 years and above. However, it contradicts the findings regarding the predominant disorder among individuals below 50 years old, as that study reported optic neuritis as the prevalent disorder in this age group . This research revealed that a significant proportion of patients who exhibited neuro-ophthalmic disorders had a BCVA of 6/6. This finding contradicts a study in Thailand, which showed that 40% of patients experiencing reduced vision were classified as blind in the afflicted eye . The etiology of blindness showed variability. 
While the incidence of non-arteritic anterior ischemic optic neuropathy was very high, it is worth noting that just one participant in the research mentioned above had complete vision loss . Multiple investigations have shown that NAION often manifests with a Snellen visual acuity better than 6/60 . The present study revealed that the predominant medical comorbidities and patient-related factors were diabetes, hypertension, and smoking. These findings align with a previous study conducted by Lee et al. in South Korea, which reported that neuro-ophthalmic patients commonly presented with underlying diseases such as diabetes, hypertension, hyperlipidemia, stroke, myocardial infarction, sleep apnea, pulmonary embolism, and deep vein thrombosis . One of the notable strengths of this research is its prospective methodology for case collection, which allows for the examination of many aspects of the illnesses across different age groups and both genders. One limitation of this study is its single-center design. Another is the interdisciplinary nature of certain neurological conditions, such as brain tumors, stroke involving the visual pathways, myasthenia gravis, hemifacial spasm, and blepharospasm, which require the collaboration of multiple medical specialties. Consequently, there is a risk of excluding possible cases that should be managed by neurologists or neurosurgeons. This limitation could contribute to a potential underreporting of the incidence rates of these conditions within the study parameters. The primary neuro-ophthalmic conditions identified in this study were ischemic optic neuropathy, sixth nerve palsy, traumatic/compressive optic neuropathy, and papilledema. The prevalence of neuro-ophthalmic illnesses varies since it is dependent upon the specific inclusion criteria used in each study. The incidence of neuro-ophthalmic illnesses is generally high. |
Safety and feasibility of deep brain stimulation of the anterior cingulate and thalamus in chronic refractory neuropathic pain: a pilot and randomized study | 390ffc66-f635-4c27-b664-3d346e426090 | 11834684 | Surgical Procedures, Operative[mh] | Moderate to severe neuropathic pain (NP) has a prevalence of 5,1% in the general population . Independently of pain intensity and duration, patients with NP report a huge impairment of quality of life and anxiety/depression scores, significantly higher than patients without pain or than patients suffering from pain without neuropathic component . Only 23% of neuropathic pain patients consulting in tertiary pain treatment centers respond to well-conducted medical treatments, including antidepressants, antiepileptics and opioids . These refractory NP patients , and especially central neuropathic pain , have a poor quality of life and no conventional therapeutic solution, justifying invasive approaches as deep brain stimulation (DBS). DBS has been proposed since the 1970s to treat refractory pain using two main targets: regions surrounding the third ventricle and aqueduct of Sylvius, including the grey matter (periventricular grey and periaqueductal grey) and sensory thalamus. Sensory thalamic DBS targeted the ventral posteromedial (VPM) and ventral posterolateral (VPL) nuclei. Sensory thalamic stimulation seems selectively effective to refractory NP and a recent meta-analysis showed VPL demonstrated the largest effect among different brain targets, with significant heterogeneity observed . However, several studies, including controlled trials, reported partial, insufficient or short-lasting efficacy that prevent its common use in daily practice. Recently, the anterior and dorsal cingulate gyrus (ACC) has been proposed as a target for DBS for refractory pain . Functional brain imaging studies suggest that ACC plays a role in integration and modulation of the cognitive, emotional and affective components of pain . 
In particular, the ACC is thought to be involved in the process of attributing unpleasantness or “suffering” to the experience of pain perception. Focal neurosurgical lesions of the ACC, namely cingulotomies, have been used to treat chronic pain, with success rates of about 50% . Chronic electrical stimulation of the ACC (ACC-DBS), a non-destructive and reversible technique, was proposed as an alternative to cingulotomy in a few open studies or case reports . In a series of 22 patients suffering from refractory pain , ACC-DBS induced a reduction of pain intensity below a VAS score of 4/10 in one third of the patients. The overall health status (EQ-5D scale) and quality of life (SF-36 scale) improved significantly (by 20% and 7%, respectively) after ACC-DBS at a mean follow-up of 13 months. Patients treated by cingulotomy or ACC-DBS reported a dissociation between the persistence of the usual pain perception and a certain indifference to pain linked to the loss of perception of its unpleasant aspect. This point and the dissociation between the significant improvement in quality of life and the lack of improvement in pain intensity suggest that ACC-DBS may modulate the cognitive and emotional integration of pain more than the pain itself , bringing new therapeutic hope to hopeless chronic refractory pain patients. However, the ACC plays a functional role in other cognitive, motivational and affective functions . Damage to the ACC results in an overall decrease in interest, motivation and activity, leading to apathy . The main adverse effect of cingulotomies was apathy, although its incidence in treated pain patients was unclear . These functions have not yet been studied in patients treated with chronic ACC-DBS. Moreover, DBS-induced indifference to pain may be associated with emotional indifference (anhedonia), loss of motivation (apathy) or cognitive impairment, which may impact patients’ daily lives. 
Moreover, the value of combining ACC-DBS with thalamic DBS remains to be clarified, as these two DBS approaches seem to have different mechanisms of action. Thalamic DBS is thought to modulate the transmission of nociceptive input and to inhibit the hyperactivity of deafferented thalamic neurons, in order to reduce pain intensity. ACC-DBS is thought to modulate the emotional integration of pain, without modifying the intensity and perception of pain. To answer these questions, we conducted an exploratory study evaluating the feasibility and safety of combined ACC-DBS and thalamic DBS in patients with refractory NP for whom all conventional treatments had failed. This study focused on the systematic assessment of the possible short-term and long-term cognitive, emotional and affective consequences of DBS. In order to evaluate the cognitive and affective impacts of ACC-DBS, the study also included a randomized phase during which ACC-DBS was alternately active (“On”) or inactive (“Off”). Study design We conducted a bicentric, prospective feasibility and safety study to evaluate bilateral ACC-DBS combined with unilateral sensory thalamic DBS in patients suffering from refractory unilateral NP. The study protocol has been previously published . Sensory thalamic and ACC-DBS devices were implanted under local anesthesia in a single-stage surgery. During the first month after surgery (M0-M1), only sensory thalamic DBS was activated (Fig. ). ACC-DBS was then activated one month after surgery (M1), and parameter settings were optimized during the next 3 months (M1-M4). Four months after surgery, all patients were randomized into two 3-month periods (separated by a 2-week wash-out period) organized in a cross-over design, comparing an ACC-DBS “On” sequence and an ACC-DBS “Off” sequence. The patients and evaluating neurologists were blinded to the treatment periods and the ACC stimulation parameters.
This randomized period was followed by a 12-month open phase with ACC stimulation On. Patients Inclusion criteria were: adult patients (age 18–70 years) suffering from chronic (duration > 1 year) unilateral NP (DN4 score ≥ 4/10), severe (VAS score ≥ 6/10 at 3 different evaluations during the year preceding inclusion), with high emotional impact (Hospital Anxiety and Depression scale sub-scores ≥ 10), considered refractory to medication specific to neuropathic pain at sufficient doses and durations (including at least antiepileptics and antidepressants), and not sufficiently improved by rTMS or potentially relevant surgical solutions. Exclusion criteria were: cognitive impairment (MMSE score < 24), DSM-IV axis I psychiatric disorder, and contraindication to surgery, DBS, anesthesia or MRI. Technical aspects Details concerning the surgical technique and stimulation parameters for thalamic and ACC-DBS have been previously published . One lead was implanted in the sensory thalamic nuclei contralateral to pain, and two leads were implanted bilaterally and symmetrically in the ACC, then connected to 2 generators. Sensory thalamic nuclei were targeted stereotactically based on the patient’s MRI, and the optimal position of the electrode was refined by intraoperative micro-electrode recordings and test stimulation to check that DBS-induced paresthesias were perceived in the painful body area. The stimulation intensity used for chronic stimulation was adapted to ensure that the stimulation-induced paresthesias were pleasant and felt in the painful region. The dorsal anterior cingulate was targeted on stereotactic MRI, according to the technique and location proposed by , approximately 20 mm posterior to the projection of the anterior tip of the frontal horn of the lateral ventricle.
We chose to target the ACC bilaterally considering that, in chronic pain patients, ACC activity changes are bilateral, and that previous successful therapeutic procedures targeting the ACC, namely cingulotomies and DBS, were performed bilaterally. Stimulation of the ACC does not induce any perceptible sensation. The stimulation parameters were based on those used by . To avoid a “kindling” effect and the risk of epilepsy, the chronic stimulation was cyclic, alternating a 5-minute “On” phase and a 10-minute “Off” phase. The stimulation parameters were optimized, depending on the therapeutic or adverse effects observed, during the period between M1 and M4. The parameters found to be the most effective and best tolerated were used for the randomized phase. Endpoints Feasibility was evaluated by the proportion of patients successfully completing the process of surgical intervention, chronic stimulation and evaluation without serious adverse events. Safety profile and efficacy were evaluated 1 month before surgery and 1, 4, 7, 10 and 22 months after, by independent assessments performed by a neurosurgeon, a neurologist specialized in pain medicine, a psychiatrist and a neuropsychologist, the last three being blinded to the randomization. Safety was evaluated by repeated general and neurological examination, psychiatric assessment, and assessment of cognitive and affective functioning. The cognitive assessment consisted of several tests: the Mini-Mental State Examination (MMSE) to evaluate global cognition, the French version of the Free and Cued Selective Reminding Test (FCSRT) to assess episodic memory, the Digit Span WAIS-IV subtest to assess working memory, the Digit Symbol-Coding WAIS-IV subtest to assess processing speed, and the GREFEX battery to assess executive functions, including the Trail Making Test (TMT), the Stroop test, the 6-element test, the Brixton test, the double task test, the modified card sorting test (MCST) and verbal fluencies.
Assessment of affective functions was performed using the Hospital Anxiety and Depression (HAD) scale , the Lille Apathy Rating Scale (LARS) , the revised version of the “Reading the mind in the eyes” test to assess theory of mind, and the Facial Expressions of Emotion – Stimuli and Tests (FEEST) to assess emotion recognition. DBS efficacy was evaluated using pain intensity on a Visual Analogue Scale (VAS), the Brief Pain Inventory , the QDSA questionnaire (French version of the Short-Form McGill Pain Questionnaire) , and quality-of-life improvement (EQ-5D-3L health questionnaire) . Statistics To assess the effects of DBS on cognition, we performed paired-samples Student’s t-tests on each raw score, comparing baseline to every other time of the study (post-op, thalamus only, thalamus and ACC, long term). A p-value and an adjusted p-value were computed using the Benjamini–Hochberg false discovery rate procedure to correct for multiple comparisons. As we could not identify a pattern in the missing data, no imputation method was used. The effect of DBS on the functioning of cognitive domains (episodic memory, executive functions, processing speed, working memory and social cognition) was assessed by grouping relevant standardized scores and computing their mean values. Lastly, we calculated the variation of these scores between baseline and every other time of measurement; this new score was called “delta-z”. Descriptive statistics were then computed. Given the design of the study and following appropriate statistical practice, we used linear mixed models (LMMs) , with time as a fixed effect, subject as a random effect, and each cognitive variable as the outcome. We also modeled the effect of time and interindividual variability with a multivariate linear mixed model considering the mean standardized scores of the different cognitive domains (episodic memory, executive functions, processing speed, working memory and social cognition).
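The Benjamini–Hochberg adjustment described above was computed in R (where `p.adjust(p, method = "BH")` implements it); as an illustrative, self-contained sketch of the procedure — not the authors' actual analysis code — the adjusted p-values can be obtained as follows:

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (equivalent to R's p.adjust(..., "BH"))."""
    m = len(pvals)
    # indices of the p-values from smallest to largest
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # walk from the largest p downward, enforcing monotonicity of the adjusted values
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

# Example: three raw p-values from baseline comparisons
print([round(p, 4) for p in bh_adjust([0.005, 0.04, 0.1])])  # -> [0.015, 0.06, 0.1]
```

The running minimum taken from the largest p-value downward is what guarantees that adjusted values never decrease as raw p-values increase.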
When necessary for linear mixed modeling, mean imputation was performed. Analyses were performed using R Statistical Software (v4.2.2; R Core Team, 2022) and the following R packages: lme4 (v.1.1.33; ), mice (v.3.16.0; ), rempsyc (v.0.1.2; ), tidyverse (v.2.0.0; ), zoo (v.1.8.12; ). Concerning the efficacy assessment, due to the small number of subjects in this study, statistical analysis was based on non-parametric tests. Results are presented as means (standard deviation [SD]) for quantitative variables. Score comparisons between each visit and baseline were performed using the Wilcoxon signed-rank test. The alpha risk was set to 5% (α = 0.05). Statistical analysis was performed with EasyMedStat (version 3.27; www.easymedstat.com ).
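The Wilcoxon signed-rank comparisons of each visit against baseline described in the Statistics section were run in EasyMedStat; purely for illustration, a minimal exact two-sided version suited to a sample of this size (n = 8), assuming no tied absolute differences, could be sketched as:

```python
from itertools import product

def wilcoxon_exact_p(before, after):
    """Exact two-sided Wilcoxon signed-rank p-value for a small paired sample.

    Zero differences are dropped (standard Wilcoxon convention); tied absolute
    differences are assumed absent, so ranks are simply 1..n -- an assumption
    of this sketch.
    """
    diffs = [a - b for b, a in zip(before, after) if a != b]
    n = len(diffs)
    # rank the absolute differences from smallest (1) to largest (n)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    total = n * (n + 1) // 2
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_obs = min(w_plus, total - w_plus)
    # under H0 each rank carries a + or - sign with probability 1/2:
    # enumerate all 2**n sign patterns and count those at least as extreme
    hits = 0
    for signs in product((1, -1), repeat=n):
        w = sum(r for s, r in zip(signs, range(1, n + 1)) if s > 0)
        if min(w, total - w) <= w_obs:
            hits += 1
    return hits / 2 ** n

print(wilcoxon_exact_p([1, 2, 3], [2, 4, 6]))  # -> 0.25
```

With only 8 patients the exact null distribution comprises just 2⁸ = 256 sign patterns, so full enumeration is trivial and no normal approximation is needed.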
Patients Eight patients were included in the study. Patients’ characteristics are detailed in Table . There were 6 men and 2 women; mean age was 52.1 years (range 42–69). Five patients suffered from central neuropathic pain and 3 from peripheral neuropathic pain. Mean pain duration before surgery was 7.1 years (range 2.5–25). Safety All the patients completed the study. Only one patient declined the neuropsychological assessment at the end of the study. Postoperative imaging confirmed the placement of the leads in the thalamus and ACC (Fig. ). Thalamic stimulation intensity, pulse width and frequency were adapted according to the paresthesias perceived by the patients and varied between 0.4–2.5 mA, 120–150 microseconds and 20–130 Hz, respectively. Due to the lack of efficacy and unpleasant DBS-induced paresthesias, 6 out of 8 patients asked to stop the thalamic stimulation after 4 months of stimulation. ACC stimulation intensity and pulse width were adapted according to the safety profile and efficacy and varied between 2–3.5 mA and 60–450 microseconds, respectively. Adverse events are detailed in Table . Surgery-related complications consisted of one intraoperative epileptic seizure that required aborting the surgery. Postoperative imaging did not show any complication. The patient recovered without neurological impairment, and the surgery was postponed and completed one month later. Most of the adverse events were observed during the ACC stimulation settings optimization period.
Several patients presented transient motor or attention disturbances that recovered without sequelae when the ACC stimulation intensity was decreased (Table ). Two patients displayed persistent adverse effects: one patient complained of gait and balance disturbances, probably related to thalamic stimulation; and one patient complained of sleep disturbances, likely related to ACC stimulation. No patient developed permanent epilepsy. No patient displayed significant changes in cognitive and affective assessment (Table ; Fig. ). Paired t-tests showed that raw scores and delta-z scores did not change significantly, either between each time of measurement and baseline, or between On and Off ACC-DBS periods. Linear mixed models confirmed the absence of cognitive and affective worsening over time (data not shown). Psychiatric clinical evaluation revealed no DBS-related impairment of emotional and affective functioning. Efficacy Mean VAS pain intensity did not change significantly according to stimulation periods (Table ; Fig. ). However, we observed a significant improvement of the EQ-5D utility index at the end of the ACC On stimulation period ( p = 0.0039) and at the end of the study ( p = 0.0034), compared to baseline. The EQ-5D VAS score tended to improve during the same periods, but without statistical significance. No endpoint varied significantly between the On and Off ACC stimulation periods. The affective pain rating index of the QDSA (French version of the McGill Pain Questionnaire) significantly improved between baseline and the end of the study, although the sensory pain rating index of the QDSA did not change significantly. At the end of the study, 4 patients considered themselves improved or very improved compared to baseline, 1 was slightly improved, 1 reported no change, and 2 considered that they had worsened compared to baseline.
Our study suggests that ACC-DBS is relatively safe, as it did not induce cognitive or affective side effects. Surgery-related complications were concordant with those usually observed in DBS procedures for movement disorders. Our initial objective was to evaluate combined ACC and thalamic DBS. However, due to the lack of efficacy and poor tolerance of thalamic DBS in our patients, most of them asked to discontinue thalamic stimulation after a few months. These results differed from studies reporting significant improvement of neuropathic pain by thalamic stimulation . Several factors might have contributed to the poor efficacy in our study.
Most of our patients displayed central neuropathic pain, known to be less responsive to thalamic DBS than peripheral neuropathic pain . In a recent international multicenter study, only 36% of patients suffering from central post-stroke pain responded to thalamic DBS . Thalamic DBS may be ineffective in central neuropathic pain in cases of thalamic destruction; however, only one patient (C1P2) in our study had a lesion involving the thalamus. We did not perform an external stimulation trial before complete implantation to select only responders. However, the efficacy of thalamic DBS has been questioned by two randomized studies and is still a matter of debate. Due to this early thalamic stimulation discontinuation, the safety of combined sensory thalamic and cingulate stimulation could be assessed during 3 months only, but we were able to assess the long-term safety of ACC-DBS. In previous studies, no ACC-DBS-specific complications or side effects were reported, except long-term epilepsy . None of our patients developed chronic epilepsy. This might be explained by our shorter follow-up and by different stimulation parameters, especially lower stimulation intensities (2.5 mA maximum in our study compared to 4.5–5 V in previous studies) and a cyclic stimulation mode, alternating “On” and “Off” periods to avoid a kindling effect that might favor the development of chronic epilepsy. However, some of our patients displayed transient abnormal motor behaviors, occurring during the ACC stimulation setting period and likely related to excessive stimulation intensity, that were similar to abnormal behaviors induced by anterior cingulate stimulation in epileptic patients explored by stereo-electroencephalography . We cannot determine whether these transient motor behaviors were focal epileptic seizures or not. No previous study has systematically assessed the potential cognitive and affective consequences of anterior cingulate DBS.
We conducted a comprehensive assessment of cognitive and affective functions and detected no significant change over a period of more than one year. This is an important point, as this dorsal anterior cingulate area, also called the anterior mid-cingulate cortex, is involved in multiple essential functions, including attention, cognitive control, memory, learning, decision making, social cognition, reward, emotion, negative affect and pain . The absence of adverse effects allows us to consider the further use of anterior cingulate stimulation, provided that it is effective. The efficacy of ACC-DBS has been evaluated only in small case series . Most of these studies reported a mild or non-significant decrease in mean pain intensity, contrasting with a significant improvement in patients’ quality of life. We observed similar outcomes when comparing the preoperative, baseline pain intensity and quality-of-life scores with those recorded at the end of the “On” ACC-DBS period and at the end of the study. These results suggest that ACC-DBS may influence patients’ perception of their own quality of life or health status, independently of pain intensity changes. In addition, considering the changes observed on the McGill Pain Questionnaire, ACC-DBS was more effective on the affective component than on the sensory component of chronic pain. On the other hand, we did not observe significant changes in the HAD depression sub-scores, indicating that the quality-of-life improvement was not related to mood improvement. Changes in ACC activity can be observed in chronic pain patients who experience pain relief, whatever the treatment, including surgery, medication or even placebo . Altogether, these results suggest that ACC-DBS may modulate the affective component of pain and/or the emotional perception of pain, leading to an improvement in quality of life. Recently, Lempka et al.
reported that DBS of the ventral striatum / anterior limb of the internal capsule (VS/ALIC), a region targeted by DBS to treat major depressive disorder, improved the depressive symptoms of patients suffering from chronic pain, but without decreasing pain intensity. VS/ALIC-DBS and ACC-DBS share a common strategy, namely targeting the affective component or affective consequences of chronic pain. This strategy could prove more feasible and relevant than targeting pain intensity itself in patients suffering from chronic refractory neuropathic pain. Despite its encouraging results, our study suffered from several limitations. The study lacked adequate statistical power to detect a potential significant change between ACC-DBS “On” and “Off” conditions, due to the low number of patients. These chronic refractory pain patients are usually complex to evaluate, manage and treat. The VAS score is insufficient to reflect this complexity and the burden of chronic pain; in these patients, pain relief does not necessarily translate into a major change in the VAS score, and more relevant and specific endpoints are needed. ACC-DBS efficacy differed across patients, and only half of them reported an improvement potentially related to ACC-DBS. Predictors of efficacy are needed to better select future responders. The safety profile of ACC-DBS nevertheless allows its efficacy to be studied in the larger controlled studies that are still needed. This pilot study confirmed the safety of anterior cingulate DBS, alone or in combination with thalamic stimulation, and suggested that it might improve the quality of life of patients with chronic refractory pain.
Multivariable prognostic modelling to improve prediction of colorectal cancer recurrence: the PROSPeCT trial | 050ed4bd-6f16-4cd1-adb8-272a029704b1 | 11519198 | Anatomy[mh] | Up to 50% of patients with colorectal cancer ultimately die from metastatic disease occult at diagnosis . Adjuvant chemotherapy following surgery aims to eradicate micrometastases, but offering this indiscriminately risks overtreatment. Selecting patients who should receive adjuvant therapy turns on prognosis, based largely on pathological tumour and nodal stage . However, patients with identical-stage tumours can experience widely divergent survival outcomes: 5-year survival varies between 63% and 87% for American Joint Committee on Cancer (AJCC, tumour-node-metastasis (TNM) stage grouping) stage II, and stage IIIA survival may exceed that of stage IIB/IIC . Also, the shift towards neoadjuvant therapy for colon as well as rectal cancer has highlighted a need for better preoperative identification of high-risk patients . Multivariable prognostic models combine multiple factors to estimate the risk of future outcome(s). While models predicting colorectal cancer outcomes are available in different clinical settings , they are not used widely. A criticism has been that they do not include promising predictors, despite recent research around imaging, genetic, and immunohistochemical biomarkers. It was hypothesised that a baseline multivariable model to predict the recurrence of colorectal cancer could be improved by the addition of more novel, promising imaging, genetic, and pathological markers of angiogenesis and hypoxia. To achieve this, a prospective multicentre trial was designed specifically to develop a prognostic model of disease-free survival. The aim was to investigate promising CT perfusion imaging and genetic and immunohistochemical markers to improve the prediction of colorectal cancer recurrence.
Study design and participants PROSPeCT (Improving PRediction Of metaStatic disease in Primary coloreCTal cancer) was a prospective, multicentre, cohort trial (ISRCTN: 95037515; REC: 10/H0713/84), conducted according to the principles of good clinical practice, and run by a clinical trials unit. Independent oversight was provided by the Data Monitoring and Trial Steering Committees. Research is reported according to the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) guidelines . Consecutive adult participants were recruited from 13 university and community hospitals between November 2011 and 2016. Eligible patients had histologically proven or suspected (endoscopy and/or imaging) primary colorectal cancer. Participants were identified via outpatient clinics, imaging requests, endoscopy lists, and tumour board meetings. Exclusions were polyp cancers; metastases at staging; contraindication to intravenous contrast agent; an invisible tumour on CT; pregnancy; concurrent cancer; and a final non-cancer diagnosis. All participants provided written informed consent. CT imaging procedures Participating centres underwent training and quality control for data acquisition . In addition to a staging CT, participants underwent CT perfusion of the primary tumour, performed on the same occasion. The CT perfusion dynamic acquisition commenced 5 s after intravenous contrast injection (> 300 mg/mL iodine; 50 mL at 5 mL/s followed by a saline chaser), with images at 1.5-s intervals for 45 s, then at 15-s intervals for 75 s. This was followed by the contrast-enhanced staging CT, performed according to the institutional standard protocol (Supplementary Table , CT acquisition parameters). CT perfusion scans were analysed by 25 designated local radiologists (each with ≥ 5 years of subspecialty experience), after central training for software familiarisation and analysis.
All had a subspecialty interest in gastrointestinal imaging. Radiologists used commercially available software provided by their CT vendor. Kinetic models included the distributed parameter model; Patlak analysis; deconvolution; and maximum slope. Using the corresponding software, radiologists defined the arterial input function; defined when the first pass of contrast had ended; and outlined the tumour contour. This was achieved by placing a fixed-size (10 mm²) circular region-of-interest (ROI) in the largest visualised artery; marking the time-point on the displayed attenuation-time curve when the lower inflection point of the curve was reached; and outlining the tumour contour using a free-hand ROI, encompassing the largest tumour area possible while taking care to avoid non-tumoural tissue (area ranging from 9.5 mm² to 2981 mm²). This generated the following vascular parameters: regional blood flow, blood volume, mean transit time, or permeability surface area product (dependent on vendor software). Perfusion variables were recorded on a case report form that also detailed tumour dimensions and location. CT TNM stage was also determined. Following imaging data transfer, CT analysis was repeated centrally by three radiologists with 5–18 years of experience in CT perfusion, using the same software used locally, unaware of prior measurements and outcomes.

Pathology procedures

For patients undergoing surgery, pathological staging was performed by pathologists at the participating institutions. Tumour staging was based on the fifth edition of the AJCC TNM staging classification as defined in the trial protocol and recorded on a case report form.
Formalin-fixed paraffin-embedded blocks were also transferred centrally for additional analysis by two subspecialty pathologists who assessed: DNA mismatch repair (MMR) protein status (via expression of MLH1, MSH2, MSH6, and PMS2); CD105 microvessel density; vascular endothelial growth factor (VEGF) expression; glucose transporter protein (GLUT-1) expression; and hypoxia-inducible factor-α (HIF-1α) expression. Tissue sections were batch-stained (Bond-RXm, Leica Biosystems; Bond Polymer Refine Detection), scanned at ×20 magnification (Hamamatsu Nanozoomer 2.0 RS), and displayed on an LCD monitor with standardised contrast, focus, saturation, and white balance. VEGF, GLUT-1, and HIF-1α were scored on staining intensity and proportion of positive cells according to previously published systems: VEGF and GLUT-1 expression was calculated by combining staining intensity (0–3) with the percentage of positive cells (0–4), and HIF-1α expression on combined cytoplasmic and nuclear staining (range, 0–6). Visiopharm software evaluated CD105 staining. DNA was extracted for somatic mutation analysis (KRAS, BRAF, PIK3CA, PTEN, APC, and HRAS), and quality and quantification were assessed (Agilent Tapestation 2200). Preparation and sequencing used Life Technologies Ion Torrent, and data were analysed using Integrative Genomics Viewer.

Clinical management decisions and follow-up

Standard clinical, radiological, and pathological investigations were interpreted and discussed at the tumour board meeting at each participating institution, and treatment decisions were undertaken as per usual clinical practice. For the primary outcome, participants were followed for 36 months (or death if sooner), and findings from outpatient visits, surveillance and/or symptomatic CT, carcinoembryonic antigen, and any other relevant investigations were recorded.
Data collation and outcomes

The clinical trials unit collated and entered data into a bespoke database, and missing fields or possible inaccuracies were queried. Baseline data included participant demographics, date and results of staging investigations, and stage and planned management determined at the tumour board meeting. The date of any recurrence or death was recorded. Recurrence was considered alongside histology from any further resections or biopsies. Assessment of recurrence was blinded to genetic and immunohistochemistry results, and to the principal component analysis (PCA) weighting for CT perfusion variables.

Statistical analysis

The primary outcome was to improve the prediction of recurrence or death by developing a model of disease-free survival superior to current practice. A recurrence event was defined as metastasis, local recurrence/new primary, and/or any death (recorded as the primary event in patients with other simultaneous events). Outcomes were based on Nelson-Aalen cumulative hazard estimates of pre-specified risk groups at 3 years, using time-to-event models. Predictions by risk group were compared via (i) differences in sensitivity and specificity and (ii) a hypothetical population of 1000 participants diagnosed with colorectal cancer, to compare different models. Modelling strategy: a best "baseline" model (Model A) was developed from prespecified standard clinical and pathological variables, namely TN stage, age, sex, tumour location and size, EMVI, and planned treatment. Univariable significance was not used to select variables.
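The Nelson-Aalen cumulative hazard estimate used for the risk-group outcomes above can be sketched as follows. This is a minimal stdlib-Python illustration with made-up follow-up times, not trial data:

```python
def nelson_aalen(times, events):
    """Nelson-Aalen cumulative hazard H(t) at each distinct event time.

    times  : follow-up time for each participant (e.g. months)
    events : 1 = recurrence/death observed, 0 = censored
    Returns (t, H(t)) pairs, where H(t) is the running sum of
    d_i / n_i (events / number still at risk) over event times <= t.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    H, out, i = 0.0, [], 0
    while i < len(data):
        t = data[i][0]
        d = m = 0  # events (d) and total exits (m) at this time point
        while i < len(data) and data[i][0] == t:
            d += data[i][1]
            m += 1
            i += 1
        if d:
            H += d / n_at_risk
            out.append((t, H))
        n_at_risk -= m
    return out

# Toy cohort with a 36-month horizon, mirroring the 3-year follow-up
times  = [6, 12, 12, 20, 24, 30, 36, 36]
events = [1,  1,  0,  1,  0,  1,  0,  0]
for t, H in nelson_aalen(times, events):
    print(t, round(H, 3))  # 6 0.125 / 12 0.268 / 20 0.468 / 30 0.801
```

Comparing such curves between the pre-specified high, medium, and low risk groups at 36 months is what the risk-group outcome comparison above amounts to.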
In order to determine the benefit (if any) of promising biomarkers, these were added to the standard model to create new models as follows: Model B (local CT perfusion variables via composite principal components analysis (PCA) score); Model D (simplest single local CT perfusion variable); Model E (central CT perfusion variables via PCA score); and Model F (pathology variables: immunohistochemical angiogenesis and hypoxia markers plus somatic mutations). Prediction of all models was compared to standard TN staging (rule C; "clinical rule"), with "high risk" patients defined by stage III AJCC stage grouping and "low risk" patients defined by stage I/II . In order to mirror model usage in clinical practice, imaging staging was used in the standard model for patients receiving neoadjuvant therapy or in whom surgery was not planned; imaging staging is deemed accurate when compared with pathological staging . The pathological stage was used in patients having surgery first. Model methods: the standard model was a Weibull parametric survival model (STATA "stpm2"). Risk groups were pre-specified based on tertile groups for each model; i.e. high risk = top tertile; medium risk = mid tertile; low risk = bottom tertile. Model performance was presented using Kaplan–Meier plots of risk groups (high vs medium/low risk, and high/medium vs low), with 95% confidence intervals (CI) and risk tables. Standard measures of discrimination and calibration were also calculated, including the c-index and calibration slope. Internal validation using bootstrapping (100 repeats) was used to assess over-optimism. Additional details regarding sample size, powering, and prognostic modelling are presented in the Supplementary material.
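The c-index used here as a discrimination measure can be illustrated with a small sketch: Harrell's concordance for right-censored data, simplified to skip tied event times, using toy numbers rather than trial output:

```python
def c_index(times, events, risk):
    """Harrell's concordance index for right-censored survival data.

    A pair is usable when the participant with the shorter follow-up
    had an observed event; the pair is concordant when that participant
    also carries the higher predicted risk. Risk-score ties count 0.5.
    Assumes at least one usable pair exists.
    """
    conc = usable = 0.0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            # order so participant a has the shorter follow-up time
            a, b = (i, j) if times[i] < times[j] else (j, i)
            if times[a] == times[b] or not events[a]:
                continue  # unusable: tied times (simplified) or censored first
            usable += 1
            if risk[a] > risk[b]:
                conc += 1
            elif risk[a] == risk[b]:
                conc += 0.5
    return conc / usable

# Toy data where higher predicted risk always precedes earlier events
times  = [5, 10, 15, 20, 25]
events = [1,  1,  0,  1,  0]
risk   = [0.9, 0.7, 0.5, 0.4, 0.2]
print(c_index(times, events, risk))  # perfectly concordant here -> 1.0
```

A c-index of 0.5 indicates discrimination no better than chance; bootstrap internal validation of the kind described above refits the model on resampled cohorts to estimate how much such apparent performance is over-optimistic.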
Participants

The participant flowchart is shown in Fig. . Baseline participant and tumour characteristics are shown in Table . Of 448 participants who were recruited, 122 (27%) were withdrawn, leaving 326 participants in the final cohort (226 male, 100 female; mean ± SD age 66 ± 10.7 years). 143/326 (44%) had colon and 183/326 (56%) rectal cancer (including rectal cancers extending into the rectosigmoid region). Surgery was performed ultimately in 308/326 (94%), of whom 92/308 (30%) had adjuvant therapy and 67/308 (22%) had neoadjuvant therapy. Following neoadjuvant treatment, there were 12/183 (7%) rectal cancer complete responders; 5/12 received no further treatment. There was no therapy information for two participants. Imaging staging was used in 83 (26%) and pathological staging in 241 cases (74%) for modelling. Most cancers were locally advanced (227/326 ≥ T3, 70%); 151/326 (46%) were node-positive (≥ N1 stage, Table ). 93/326 (29%) had venous invasion. The resection margin was positive in 15 (6%) of 252 with recorded data. Ultimately, there were 81 events over 3 years: 31 (39%) in year 1; 29 (36%) in year 2; and 21 (25%) in year 3. Fifty-two (64%) developed metastasis. Twelve (14%) developed new primaries. Seventeen (22%) died.
There was venous invasion in a higher proportion of participants with recurrence (36/81, 44%) than without (57/245, 23%), with a significant relationship in both univariable and multivariable analysis with standard clinical variables (Supplementary material).

CT perfusion analysis

CT perfusion measurements from participating sites showed no apparent difference between participants with and without recurrence at local and central review (Supplementary Table ).

Immunohistochemical and somatic mutation analysis

Immunohistochemical and somatic mutation analyses split by participants with and without recurrence are shown in Supplementary Tables and . Distributions of HIF-1α, VEGF, and GLUT-1 scores were similar across both groups (Supplementary Table ). KRAS wild-type status showed the largest difference in proportion between participants with recurrence (34/62, 55%) and without (96/208, 46%) (Supplementary Table ). Univariable and multivariable hazard ratios showed that genetic and immunohistochemistry variables were not associated with recurrence for all variables included in modelling (Supplementary Tables – ).

Prognostic modelling

Sensitivity and specificity for standard AJCC TNM staging for predicting recurrence were 0.56 (95% CI: 0.44, 0.67) and 0.58 (95% CI: 0.51, 0.64), respectively (Fig. ). The equation for the best model of clinicopathological variables (TN stage, sex, age, tumour location and size, EMVI, and treatment; Model A) is presented in the Supplementary material. This model was used at two operating points: at "high" vs "medium/low" risk, specificity improved over staging alone to 0.74 (95% CI: 0.68, 0.79) with equivalent sensitivity of 0.57 (95% CI: 0.45, 0.68). At "high/medium" vs "low" risk, sensitivity over staging improved to 0.89 (95% CI: 0.80, 0.95) but with diminished specificity of 0.40 (95% CI: 0.31, 0.47).
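The hypothetical 1000-patient comparison described in the statistical analysis can be reproduced approximately from these reported operating points, assuming the cohort's observed 3-year event rate of 81/326. This is illustrative arithmetic only, not the trial's tables:

```python
def classify(n, prevalence, sensitivity, specificity):
    """Expected true/false positives and negatives in a cohort of n."""
    pos = n * prevalence          # patients who will recur or die
    neg = n - pos
    tp = sensitivity * pos        # correctly flagged high risk
    fn = pos - tp                 # recurrences missed
    tn = specificity * neg        # correctly flagged low risk
    fp = neg - tn                 # over-treated
    return {k: round(v) for k, v in
            dict(tp=tp, fn=fn, tn=tn, fp=fp).items()}

prev = 81 / 326  # ~0.25, observed 3-year event rate in the cohort
print("TN staging        :", classify(1000, prev, 0.56, 0.58))
print("Model A (high)    :", classify(1000, prev, 0.57, 0.74))
print("Model A (high/med):", classify(1000, prev, 0.89, 0.40))
```

On these figures, Model A at the high vs medium/low threshold yields roughly 120 fewer false positives per 1000 patients than staging alone, with a similar number of missed recurrences; at the high/medium vs low threshold the trade-off reverses, with far fewer missed recurrences but many more false positives.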
The addition of CT perfusion to the baseline clinicopathological model (Model A) did not improve prediction substantially over and above this model alone (Fig. , Models B–E). For example, for Model B (i.e. Model A + local CT perfusion variables), sensitivity at the "high" vs "medium/low" risk threshold was 0.58 (95% CI: 0.46, 0.70) and specificity 0.75 (95% CI: 0.68, 0.81). The addition of genetic and immunohistochemical markers to the baseline clinicopathological model (Model A) also did not improve prediction substantially over and above the standard model (Fig. , Models F1–F3). For example, for Model F3 (i.e. Model A + all pathology variables), sensitivity at the "high" vs "medium/low" risk threshold was 0.68 (95% CI: 0.53, 0.81) and specificity 0.76 (95% CI: 0.68, 0.82). Kaplan–Meier curves for the predictive performance of the baseline Model A and other selected models are shown in Fig. . Kaplan–Meier curves for the predictive performance of standard staging are shown in Supplementary Fig. . Table summarises prediction measures of discrimination and calibration for all models. Ultimately, the addition of the previously published promising markers failed to improve the prediction of the clinicopathological model meaningfully.
Prognostication in clinical practice is most commonly by AJCC staging , which combines tumour, nodal, and metastatic status. This is validated and widely accepted, but ignores additional potentially useful prognostic information .
Multivariable prognostic models in healthcare combine multiple factors to estimate the risk of future outcome(s), such as recurrence or death, and aim to inform clinical decisions by facilitating personalised management . Models are typically developed using multivariable regression, which combines weighted predictors in an equation that estimates individual risk. Models previously proposed for colorectal cancer include Numeracy by Adjuvant! Online . Novel markers promise to improve prognostication and to personalise the treatment of cancer patients, but biomarker research (including imaging, immunohistochemical, and genetic biomarkers) is often limited by low power, over-optimistic prediction, and poor generalisability, reflecting investigation of single markers, small samples, and a lack of prospective multicentre evaluation. In this prospective multicentre trial, we verified that the sensitivity and specificity of TNM staging alone for the primary outcome (recurrence/death by 3 years) were limited, at 0.56 and 0.58, respectively. In comparison to TNM staging, a clinicopathological model including sex, age, tumour and nodal stage, tumour location and size, vascular invasion, and treatment improved specificity (0.74 vs 0.58) with equivalent sensitivity (0.57 vs 0.56) when used to identify high vs medium/low-risk participants. When used to identify high/medium vs low-risk patients, sensitivity was higher (0.89 vs 0.56), but with diminished specificity (0.40 vs 0.58). While this model was unable to increase sensitivity and specificity simultaneously and substantially, it promises clinical utility by improving prediction of recurrence compared to staging alone. Patients' perspectives will influence which threshold to adopt; i.e. improved specificity to diminish overtreatment risk, or improved sensitivity to diminish the chance of missing future recurrence.
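How such a regression model turns weighted predictors into an individual risk estimate can be sketched generically. The predictors, coefficients, and Weibull shape below are hypothetical, chosen purely for illustration; the actual Model A equation is given in the trial's Supplementary material:

```python
import math

def three_year_risk(x, coefs, intercept, shape):
    """Individual risk of an event by t = 3 years under a generic Weibull
    proportional-hazards model: S(t) = exp(-exp(lp) * t**shape), where the
    linear predictor lp is the weighted sum of the patient's predictors.
    All numbers used here are illustrative, not the trial's Model A.
    """
    lp = intercept + sum(b * v for b, v in zip(coefs, x))
    surv = math.exp(-math.exp(lp) * 3.0 ** shape)
    return 1.0 - surv

# Hypothetical predictors: [node-positive (0/1), EMVI (0/1), tumour size (cm)]
coefs, intercept, shape = [0.8, 0.6, 0.1], -3.0, 1.2

low  = three_year_risk([0, 0, 3.0], coefs, intercept, shape)
high = three_year_risk([1, 1, 6.0], coefs, intercept, shape)
print(round(low, 2), round(high, 2))  # -> 0.22 0.75
```

Sorting patients by such a predicted risk and cutting at tertiles is exactly how the pre-specified high, medium, and low risk groups in this trial were formed.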
In order to assess the prognostic utility of novel biomarkers, statisticians advocate building a "baseline" standard model from predictors already considered clinically useful , rather than selecting from the study dataset by univariable significance (which encourages overfitting) . The benefit, if any, of promising biomarkers is then determined by whether their addition to the standard model improves prediction significantly, instead of continually re-fitting the entire model (which results in over-optimistic prediction) . To avoid constraints imposed by retrospective datasets, we used a prospective design to eliminate recruitment biases and acquired sufficient events (namely patients developing distant metastasis or dying). Evaluating multiple pre-specified predictors with adequate power necessitated a time-consuming multicentre design but ensured the data were generalisable and represented up-to-date clinical practice. However, we found that the addition of the prespecified promising CT perfusion imaging, genetic, and immunohistochemical markers to the clinicopathological model failed to improve prediction substantially over and above the baseline model. For example, when immunohistochemical/genetic variables were included with the clinicopathological variables, sensitivity and specificity at the high vs medium/low-risk threshold were slightly higher (sensitivity, 0.68 vs 0.57; specificity, 0.76 vs 0.74) but not to a clinically meaningful extent. For CT perfusion variables, sensitivity and specificity at the high vs medium/low-risk threshold were similar to the clinicopathological model (sensitivity, 0.58 vs 0.57; specificity, 0.75 vs 0.74). The belief that individual tumour biology influences prognosis, irrespective of stage, underpins recent extensive 'omic' research. For example, evidence suggests preoperative CT perfusion measures might predict subsequent recurrence by reflecting tumour angiogenesis and hypoxia .
RAS mutational testing may predict response to anti-epidermal growth factor receptor therapy, and microsatellite instability or immunohistochemistry testing for MMR proteins may identify Lynch syndrome . Accordingly, we hoped that these promising biomarkers of angiogenesis, hypoxia, and gene mutation would improve prediction. That none of these pre-specified biomarkers improved prediction to a clinically relevant extent when added to the baseline clinicopathological model highlights the challenges for novel biomarker research. A recent article highlighted that 'omic' research often ignores clinical data and/or fails to develop models appropriately . As proof, the authors developed a model for breast cancer survival that included stage, age, receptors, and grade. Adding gene expression failed to improve prediction and only became useful if clinical data were excluded altogether. Ultimately, the authors argued that omics "may not be much more than surrogates for clinical data" . Similarly, researchers found that predictors of cardiovascular disease contributed little over and above basic clinical measurements . Expert opinion stipulated that our standard model include extramural vascular invasion , and we found extramural vascular invasion to be statistically significant in both univariable and multivariable analyses. Further research and models should consider including extramural vascular invasion, including CT imaging-assessed invasion in the neoadjuvant setting. Our study has limitations. First, the number of participants developing distant metastasis or dying was lower than expected from historical data (likely due to neoadjuvant therapy, improved surgery reducing resection margin positivity, and screening programmes that detect early-stage tumours), though target recruitment was achieved. Second, the number of participants undergoing additional histopathological analysis was relatively small, as our study was powered primarily for the CT imaging markers.
Third, our findings should not be over-interpreted. While the baseline standard model was superior to standard current practice (AJCC staging), its clinical utility needs confirmation in daily practice. Finally, we made no comparison with commercial models (e.g. immunoscore ) that are used alongside TN staging. In summary, we found that a prognostic model based on prospectively derived, prespecified standard clinicopathological variables outperformed TN staging by improving either specificity or sensitivity (the latter at the cost of diminished specificity), showing promise for clinical practice. The addition of previously published promising imaging, immunohistochemical, and genetic biomarkers in a robust multicentre prospective trial did not substantially improve prediction performance, highlighting the potential over-optimism of published prognostic markers.
International practice patterns and perspectives on endovascular therapy for the treatment of cerebral venous thrombosis

Cerebral venous thrombosis (CVT) is a rare form of stroke that leads to death or dependence in 10–15% of patients. – Due to the rarity of CVT, there is a lack of large, randomized trials to inform management, and existing treatment guidelines are consensus-based, primarily derived from observational studies or small clinical trials with limited statistical power. – Anticoagulation is the recommended first-line standard-of-care therapy, with endovascular therapy recommended for cases with clinical deterioration despite anticoagulation. , The role of endovascular therapy (EVT) as an adjunctive first-line treatment in the management of CVT remains an area of uncertainty, with limited evidence and diverse practices among clinicians. Clinical trials in this area are hindered by undefined eligibility criteria and limited uniformity in approaches to endovascular treatment. The only randomized trial in this area, Thrombolysis or Anticoagulation for Cerebral Venous Thrombosis (TO-ACT), was stopped early for futility, and a propensity score analysis from the large observational Anticoagulation in the Treatment of Cerebral Venous Thrombosis (ACTION-CVT) study did not demonstrate a survival or functional benefit for EVT in CVT. , However, it is not clear whether EVT may benefit specific patient populations, and whether these populations vary by the technique used. Thus, decision-making around EVT in the treatment of CVT remains case-by-case, and current practice patterns are not well known.
To gain a comprehensive understanding of international practices and perspectives on EVT for CVT, we conducted a large, global survey of stroke clinicians across 61 countries aimed at elucidating existing practices in management and influencing factors when considering EVT for CVT. This resource will aid in selecting optimal patient populations, endovascular techniques, and postoperative management for future clinical trials and in clinical practice.
This survey received ethics approval from our institutional ethics board, the University of British Columbia Clinical Research Ethics Board (approval number H22-02916), and all participants gave informed consent. This report follows the CROSS guidelines. A survey comprising 42 questions (provided in Supplementary Data ) was distributed to stroke neurologists, neurointerventionalists, neurosurgeons, and other relevant clinicians globally through local networks and professional societies, including the Society of Vascular and Interventional Neurology, the German Stroke Trials Network, the Cerebral Venous Thrombosis Consortium, and the Women in Neurointervention WhatsApp group. Survey responses were recorded between May 2023 and October 2023. The estimated completion time was 5–10 min. The survey was sent by electronic mail and answered online using the Qualtrics platform (Qualtrics, Provo, UT), and submission was only possible upon completion of the entire survey. The questionnaire was designed to collect responses in four main categories: (1) respondent demographics and experience with EVT for CVT; (2) clinical, radiographic, and procedural factors considered when assessing a patient for EVT; (3) endovascular techniques used and those never indicated for use in CVT treatment; and (4) use and timing of post-EVT imaging and medical management. Participation was voluntary, and responses were anonymized. Respondents were given the option to enter a random draw for a US$500 gift card to Amazon upon completing the survey. The study complied with local institutional research board regulations, and informed consent was implied upon completing the survey. The survey was distributed in English except in China, where participants completed a version translated into Chinese (Mandarin) by a professional medical translator and peer-reviewed by a native-speaking co-author in China (Y.C.).
To identify regional differences in survey responses, countries were grouped by continent, specifically North America, Central America, South America, Europe, Asia, Oceania, and Africa. Regions were grouped based on sample size where appropriate. Given the distinct use of a translated survey for distribution in mainland China, we assessed responses from mainland China separately from the rest of Asia. Responses were also grouped by non-interventionalist stroke specialists and interventionalists (including interventional neurologists, interventional radiologists, interventional neuroradiologists, and neurosurgeons). Questions about technical considerations in EVT were directed through branching logic to interventionalists only. Categorical data were summarized as counts and percentages, and comparisons were made using the chi-squared test. The Bonferroni correction was applied as p-value adjustments for all post hoc analyses. For questions where participants were able to select more than one answer, percentages are given relative to the number of respondents, so summed percentages can exceed 100%. To visualize qualitative rankings of factors influencing the use of EVT in patients with CVT, categories (not important, somewhat important, and very important) were converted into unweighted, ordered integers (1, 2, and 3, respectively). Sankey diagrams for post-procedural management included only respondents who answered both the modality and timing questions. Statistical significance was set at p ⩽ 0.05. Authors B.A.B. and T.S.F. had full access to all the data in the study and take responsibility for its integrity and the data analysis.
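Two of the conventions described above — per-respondent percentages for multi-select questions (so summed percentages can exceed 100%) and the conversion of qualitative importance ratings to unweighted ordered integers — can be sketched in a few lines. The responses below are hypothetical illustrations, not survey data:

```python
# (1) Multi-select question: each respondent may pick several techniques,
# and percentages are computed relative to the number of respondents.
responses = [
    {"aspiration", "stent retriever"},
    {"aspiration"},
    {"intra-sinus heparin", "aspiration"},
    {"stent retriever"},
]
n = len(responses)
techniques = sorted({t for r in responses for t in r})
pct = {t: sum(t in r for r in responses) / n for t in techniques}
print(pct)                # per-respondent fractions for each technique
print(sum(pct.values()))  # exceeds 1.0 because respondents chose multiple options

# (2) Qualitative importance categories mapped to unweighted, ordered integers
importance_map = {"not important": 1, "somewhat important": 2, "very important": 3}
ratings = ["very important", "somewhat important", "very important"]
print([importance_map[r] for r in ratings])
```

The per-respondent denominator (rather than per-selection) is what makes the technique percentages in the Results sum to more than 100%.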
Respondent demographics and experience with EVT for CVT

There were 2744 invited participants. The overall response rate of the survey was 31%, consisting of 863 respondents across 61 countries . Respondent demographics and experience with EVT for CVT are summarized in and , respectively. Briefly, the majority of respondents were employed in an academic hospital with a comprehensive stroke center (75.9%), with years of experience varying widely across respondents . We received responses from a wide range of relevant specialists, consisting mostly of stroke consultants (60.7%) and neurointerventionalists (34.6%). The majority of respondents had treated at least one case of CVT using EVT within the past 3 years (55.5%), and of those who had, most had done so in two to five patients (49.5%; ). In addition, most of these respondents (66.8%) estimated that EVT was used in less than 5% of CVT cases . The majority of respondents (74%) favored the use of EVT over standard medical treatment for patients with CVT in certain situations.

Factors influencing the decision to use EVT for CVT

Overall, clinical factors were ranked as more important than radiographic and procedural factors by respondents when considering the use of EVT for CVT treatment, although this was not a direct comparison made in the survey. In terms of clinical factors, worsening level of consciousness (LOC), worsening clinical deficits besides LOC, and having a trial of anticoagulation prior to EVT were ranked as the most important factors (ranked as "very important" by 86%, 76%, and 74% of respondents, respectively; ). As for radiographic and procedural factors, the location and burden of the thrombus, the probability of recanalization, and the interventionalist's experience with EVT for CVT ranked as most important in this regard .
The responses of interventionalists and non-interventionalists were similar with respect to the importance of clinical, radiographic, and procedural factors in the decision-making process ( Figure S1 ).

Use of endovascular techniques for CVT

Regarding access, almost all interventionalists (98%) believed the dural sinuses were amenable to endovascular intervention, while a minority believed the deep cerebral veins to be amenable to EVT (39%). There was substantial heterogeneity among interventionalists regarding preferred endovascular techniques , with mechanical thrombectomy with aspiration (56.2%) and stent retriever (50.5%) being the most utilized overall, followed by direct thrombolysis with tissue plasminogen activator (33.4%), direct intra-sinus administration of heparin (32.8%), and balloon angioplasty (23.4%). In addition to overall heterogeneity in techniques used, a chi-squared test comparing responses by region showed significant geographical variations ( p < 0.001). Differing from responses from other countries, interventionalist respondents from China instead used direct intra-sinus administration of heparin most commonly (56.3%). In contrast, this technique was ranked highest as "never indicated" by respondents outside of China (23%; ).

Post-procedural imaging and medical management

Preferred post-procedural imaging was largely distributed between magnetic resonance arteriography or venography (MRA/MRV; 71.8%) and computed tomography arteriography or venography (CTA/CTV; 65.9%); digital subtraction angiography (DSA) was used less frequently (18.2%; ). Respondents from Asia in particular showed a slight preference for MRA/MRV over CTA/CTV ( Figure S2 ). Timing of post-procedural imaging was also not consistent, with clinicians most frequently selecting an imaging timepoint of 24 h (69.2%). For immediate post-procedural medical management, low molecular weight heparin (LMWH; 82.5%) was preferred in contrast to unfractionated heparin (37.4%; ).
North American respondents in particular favored unfractionated heparin over LMWH ( Figure S3 ). Preferred duration of initial post-procedural anticoagulation ranged from 0 to more than 10 days. Choice of subsequent oral anticoagulation was also split, with 70.8% of respondents preferring direct oral anticoagulants (DOACs) and 55.2% vitamin K antagonists . The preferred duration for long-term anticoagulation was most commonly 6–12 months (67.2%).
The management of CVT presents a significant challenge due to its rarity and the absence of well-established treatment guidelines. Only one randomized trial has assessed the efficacy of EVT compared with standard medical management. The TO-ACT trial, published in 2019, included patients with characteristics associated with adverse outcomes, including intracerebral hemorrhage (ICH) at baseline, deep venous involvement, decreased level of consciousness, or "mental status disorder." Participants were randomized to EVT without prespecified techniques versus best medical therapy, and the trial was stopped for futility after 67 of a target 164 patients were recruited, failing to demonstrate a benefit for the primary outcome of modified Rankin scale (mRS) 0–1 at 12 months. When meta-analyzed with the multinational, observational ACTION-CVT cohort study, there remained no consistent signal of benefit for EVT in the treatment of CVT. Subsequent observational studies have not demonstrated a significant benefit of EVT, but also have not identified safety concerns with EVT used as an adjunct therapy. , , In addition, full recanalization was found to be associated with improved outcomes, although sample sizes were small. The clinical landscape of EVT for CVT is reminiscent of early attempts at EVT for acute ischemic stroke, which perhaps failed to show efficacy due to a lack of optimized patient selection criteria or technique use. In addition, the interpretation of outcomes from the existing observational literature is substantially influenced by selection bias. The high mortality rates in CVT patients treated with EVT arise from the fact that only the most severely affected patients undergo EVT, because it is most commonly used as a rescue therapy. , While the impact of EVT on the treatment of CVT is not yet well understood, determining it will be crucial to improving patient outcomes.
This survey represents the largest, most comprehensive characterization of EVT use in CVT to date, reflecting the experience of 863 stroke clinicians across 61 countries. Our findings confirm that, despite the paucity of supporting literature to date, EVT continues to be used in certain cases of CVT. However, techniques and post-procedural management are varied, and substantial uncertainty remains around the characteristics that might make a patient with CVT a "good" candidate for EVT. The survey uncovered substantial heterogeneity in the techniques employed for EVT, with mechanical thrombectomy with aspiration, mechanical thrombectomy with stent retriever, direct thrombolysis with tissue plasminogen activator, direct administration of heparin, and balloon angioplasty being the most commonly used. This diversity in approach reflects the lack of evidence-based guidelines in this area, stemming from the absence of robust trials assessing the superiority of one technique over another. Instead, the most commonly used techniques mirror those used for the treatment of acute ischemic stroke, suggesting clinicians' choice of technique may be influenced by their familiarity with those techniques and comfort with the associated devices. Regional variations were also observed, most strikingly with China favoring direct anticoagulation with heparin, a technique ranked most commonly as "never indicated" in other parts of the world. There have been few studies comparing EVT techniques for CVT, and currently there is insufficient evidence to suggest which endovascular approach and device is optimal. , Overall, our findings emphasize a need for contemporary clinical trials to guide clinical decision-making and establish evidence-based standardized practices.
By providing a comprehensive characterization of indications, techniques, and postoperative management used by clinicians internationally, this resource will aid in optimizing patient selection and endovascular treatments for future trials and clinical decision-making. This study is subject to limitations common to physician surveys, including a response rate of 31% and the possibility of responder bias and centrality bias. The survey may have been completed preferentially by respondents favoring EVT in CVT, and thus our results may differ from CVT care in general. Given that the survey was not translated into languages other than Chinese, it is possible that there may have been differential interpretation of survey questions or selection bias toward English-speaking clinicians. Furthermore, outside of China, only respondents who comprehended academic-level English were able to complete the survey. We also cannot definitively exclude the possibility of duplicate responses, although IP addresses were reviewed to ensure no duplicates were present. Importantly, the wide scope of our survey limited our exploration of specific factors influencing decision-making regarding the use of EVT for CVT, which is a needed area for future research and would help to further improve patient selection in subsequent clinical trials. Regardless, this work provides a baseline for understanding current international practice patterns for the treatment of a rare disease, based on hundreds of responses from expert clinicians.
This international survey highlights the considerable heterogeneity in approaches to EVT for CVT among stroke clinicians globally. The lack of standardized practices in patient selection, procedural techniques, and post-procedural management emphasizes a persistent need for contemporary, high-quality clinical evidence to guide practice, and this characterization of practice patterns will act as a resource to guide future clinical investigations. Future trials should not only assess the efficacy and safety of different EVT approaches, but also provide guidance on patient selection criteria and post-procedural care. Establishing a global consensus on EVT protocols for CVT, based on the availability of local devices, will be crucial in improving patient outcomes and fostering a more evidence-based approach to the management of this challenging condition.
sj-pdf-1-wso-10.1177_17474930241304206 – Supplemental material for International practice patterns and perspectives on endovascular therapy for the treatment of cerebral venous thrombosis

Supplemental material, sj-pdf-1-wso-10.1177_17474930241304206 for International practice patterns and perspectives on endovascular therapy for the treatment of cerebral venous thrombosis by Benjamin A Brakel, Alexander D Rebchuk, Johanna Ospel, Yimin Chen, Manraj KS Heran, Mayank Goyal, Michael D Hill, Zhongrong Miao, Xiaochuan Huo, Simona Sacco, Shadi Yaghi, Ton Duy Mai, Götz Thomalla, Grégoire Boulouis, Hiroshi Yamagami, Wei Hu, Simon Nagel, Volker Puetz, Espen Saxhaug Kristoffersen, Jelle Demeestere, Zhongming Qiu, Mohamad Abdalkader, Sami Al Kasab, James E Siegler, Daniel Strbian, Urs Fischer, Jonathan Coutinho, Anita Munckhof, Diana Aguiar de Sousa, Bruce CV Campbell, Jean Raymond, Xunming Ji, Gustavo Saposnik, Thanh N Nguyen and Thalia S Field in International Journal of Stroke
Moral distress among maternal-fetal medicine fellows: a national survey study

Moral distress, or the inability to carry out what one believes to be ethically appropriate because of uncontrollable constraints or barriers, is understudied in obstetrics and gynecology (OB/GYN), and specifically in Maternal-Fetal Medicine (MFM). Originally identified in nursing, the concept of moral distress has since been studied among an array of healthcare professionals, including physicians, respiratory therapists, social workers, and healthcare organization administrators . Moral distress can stem from factors such as challenges in clinical care, barriers in interdisciplinary collaboration, and systems inefficiencies. Moral distress can contribute to burnout and intention to leave one's position in healthcare. The original validated survey, the Moral Distress Scale-Revised (MDS-R), was recently re-evaluated and redesigned as the Measure of Moral Distress – Healthcare Professionals (MMD-HP), which is specifically geared toward healthcare professionals. However, to our knowledge, moral distress has not been studied with this validated tool among OB/GYNs or MFM specialists, nor at a national level. Rising maternal mortality and increasing reproductive rights restrictions in large parts of the country would both likely impact OB/GYN job satisfaction and feelings of moral distress . There is already evidence that OB/GYN trainees are avoiding pursuing training in states with reproductive restrictions , likely due to a combination of limitations around providing evidence-based care and concerns regarding the legality of managing pregnancy complications .
Specifically, as MFM physicians are often the ones to diagnose and counsel regarding high-risk maternal, pregnancy, and fetal complications (such as cyanotic maternal cardiac disease, previable rupture of membranes, and severe fetal anomalies), restrictions on pregnancy management uniquely impact those within MFM. Therefore, we aim to establish a baseline measure of moral distress among MFM fellows, and to compare measures of moral distress between fellows who practice in regions with differing rates of maternal mortality and reproductive restrictions.
We performed an anonymous cross-sectional national survey of MFM fellows in the United States using a validated survey tool, the Measure of Moral Distress – Healthcare Professionals (MMD-HP) (Supplemental materials ). This validated survey describes 27 scenarios (e.g., "participate in care that causes unnecessary suffering or does not adequately relieve pain or symptoms" and "participate in care that I do not agree with but do so because of fears of litigation") and asks participants to rate the frequency of each scenario and the level of distress it causes, each on a 5-point Likert scale (scored from 0 to 4). For each scenario, the frequency and distress ratings are multiplied, and the products are then summed across the 27 questions. The total score can range from 0 to 432 points, with higher scores indicating greater moral distress. After the validated questions are addressed, surveys can be supplemented with additional specialty-specific scenarios to explore additional areas of moral distress. These supplemental questions, however, do not contribute to the overall score. For this study, we created six supplemental scenarios describing the balance of maternal risk with fetal benefit, situations of medical uncertainty or futility, and allocation of resources (Supplemental materials ). These supplemental questions were developed iteratively and internally within our own MFM division. The score for the supplemental section can range from 0 to 96 points. The survey also gathered basic demographic questions such as characteristics of the training program, religiosity, and political affiliation, and included a free-text field where respondents could optionally elaborate on specific scenarios of moral distress. This voluntary 15-minute survey was disseminated via electronic mail to all MFM fellows in the United States, either via direct email or through program coordinators, according to the Society for Maternal-Fetal Medicine Fellowship Directory.
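The composite scoring just described (per-item frequency × distress on 0–4 scales, summed over 27 items, for a possible range of 0 to 27 × 4 × 4 = 432) can be sketched as follows. The function name and the ratings are illustrative, not part of the published instrument:

```python
def mmd_hp_score(frequency, distress):
    """Composite moral-distress score from paired per-item 0-4 ratings:
    multiply frequency by distress for each of the 27 items, then sum."""
    assert len(frequency) == len(distress) == 27
    assert all(0 <= f <= 4 for f in frequency)
    assert all(0 <= d <= 4 for d in distress)
    return sum(f * d for f, d in zip(frequency, distress))

# A respondent rating every item at the maximum reaches the 432-point ceiling
print(mmd_hp_score([4] * 27, [4] * 27))  # -> 432

# Hypothetical uniform mixed ratings: 27 items * (2 * 3) = 162
print(mmd_hp_score([2] * 27, [3] * 27))  # -> 162
```

The same multiply-then-sum rule applied to the six supplemental scenarios explains that section's 0–96 range (6 × 4 × 4 = 96), although those items do not contribute to the overall MMD-HP score.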
All MFM fellows in training, either in stand-alone MFM programs or combined programs (i.e., genetics, critical care), were invited to participate. In addition, the survey link was shared on various social media groups and listservs with MFM trainee members. We estimated a total of 400 MFM fellows based on program rosters and anticipated a response rate of 60%. We offered twelve randomly selected participants a twenty-five dollar e-gift card as an incentive for completing the study. Only completed responses were analyzed. The survey was open from February 7, 2024 to May 5, 2024. Responses were captured and analyzed on the secure platform Qualtrics. Respondents were also grouped based on state of training according to abortion restrictions and maternal mortality rates. For abortion restriction, we referred to the Guttmacher Institute abortion access map (designations active as of May 2024) and collapsed its seven categories into four groups for ease of analysis . States belonging to the Guttmacher Institute's "most restrictive" and "very restrictive" categories were collapsed into a group we named "Abortion very restricted". States belonging to the Guttmacher Institute's "restrictive" category formed the group "Abortion restricted". States belonging to the Guttmacher Institute's "some restrictions/protections" and "protective" categories were grouped as "Abortion protected". Finally, states belonging to the Guttmacher Institute's "very protective" and "most protective" categories were grouped as "Abortion very protected" for the purposes of this study (Table ). Additionally, we ranked states by maternal mortality rates per 100,000 births based on the most recent report from the Centers for Disease Control and Prevention, covering 2018 to 2021 . For states with unreportable statistics due to privacy protections, we looked up state-specific reports of birth counts from 2018 to 2021 and calculated the maternal mortality rate per 100,000 births .
States were then assigned to one of four maternal mortality groups, ranging from highest to lowest maternal mortality, as follows: "Highest mortality" (26.3–43.5 maternal deaths per 100,000 births), "High mortality" (21.7–25.7 maternal deaths per 100,000 births), "Mid-mortality" (16.7–21.2 maternal deaths per 100,000 births), and "Low mortality" (4.8–16.4 maternal deaths per 100,000 births). Of note, 38 states and the District of Columbia currently have MFM programs (see Supplemental materials for a list of all 50 states and the District of Columbia, whether each has a fellowship program, and their designations with regard to abortion restrictions and maternal mortality). Geographic regions were defined according to the National Geographic . We used the Student t-test and ANOVA to calculate unadjusted associations between moral distress and demographic variables, category of abortion restrictions, and category of maternal mortality. Multivariable linear regression was used to examine the association between (1) abortion restrictions and moral distress and (2) maternal mortality and moral distress, adjusting for a priori determined demographic variables (age, gender identity, race/ethnicity, year of training, political identification, and religious identification). Thematic analysis, a well-established research methodology for organizing qualitative data into a series of themes or patterns , was performed on the free-text responses elaborating upon moral distress, and responses were grouped by thematic elements. The study was approved as exempt by the Institutional Review Board at our institution.
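A minimal sketch of the state-level rate calculation and group assignment described above, using the lower bound of each reported range as the cutoff (an assumption, since the reported ranges leave small gaps) and labeling the second group simply "High mortality". The deaths and births figures are hypothetical, not CDC data:

```python
def mortality_rate(deaths, births):
    """Maternal deaths per 100,000 live births, as computed for states
    whose rates were not directly reportable."""
    return deaths * 100_000 / births

def mortality_group(rate):
    """Bin a state's rate into the study's four mortality groups,
    assuming the lower bound of each reported range as the cutoff."""
    if rate >= 26.3:
        return "Highest mortality"
    if rate >= 21.7:
        return "High mortality"
    if rate >= 16.7:
        return "Mid-mortality"
    return "Low mortality"

rate = mortality_rate(deaths=55, births=250_000)  # hypothetical state totals
print(rate)                   # -> 22.0 per 100,000 births
print(mortality_group(rate))  # -> High mortality
```

Computing `deaths * 100_000 / births` (rather than dividing first) keeps the intermediate value an integer and avoids a small floating-point rounding step.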
Of 245 responses (61% response rate), we analyzed 177 complete responses (44% complete response rate). 68 responses were not analyzed because they were incomplete (most often, demographic information was provided but no scenarios were scored with respect to moral distress). We received at least one response from every state (and the District of Columbia) with an MFM fellowship, other than Arkansas. Most of our respondents identified as female (78.5%), White (71.8%), and aged 31–35 years (72.9%), and were training in urban programs (83.1%) that are academic/university-affiliated (92.1%) (Table ). 37.9% of respondents were training in the Northeast, with the remainder of respondents evenly distributed across the U.S. geographically. Responses were evenly distributed across levels of training, and 12 fellows reported training within a combined program (8 medical genetics, 2 anesthesia/critical care, 1 addiction medicine, and 1 clinical informatics). Most (39.5%) were training at a hospital with an annual delivery volume of 3001–5000 births. 32.8% of our respondents identified as religious and 72.9% reported a political affiliation, with 90.7% of those affiliated with the Democratic party (Table ). The mean score for all respondents on the validated portion of the questionnaire was 85.9 ± 48.8. Female gender identity was associated with higher measures of moral distress on the validated portion of the questionnaire as compared to male gender identity (90.1 ± 49.2 vs. 70.4 ± 44.7, p < 0.05), whereas more advanced training was associated with higher measures of moral distress on the supplemental questions (20.9 ± 11.8 vs. 28.5 ± 15.9 vs. 25.9 ± 15.6 for PGY-5 vs. PGY-6 vs. PGY-7 and PGY-8 combined, respectively, p < 0.05) (Table ). There was no association between training in states with various levels of abortion restriction or maternal mortality and moral distress on bivariate analysis for either the validated questionnaire or the supplemental questions (Tables and ).
In our multivariable linear regression model examining the association between moral distress and abortion restrictions, higher moral distress on the validated questionnaire was associated with training in a state with increasing abortion restrictions (beta estimates were all positive when comparing "Abortion very restricted", "Abortion restricted", and "Abortion protected" with "Abortion very protected"; beta estimate 27.80, p < 0.01, for the comparison between "Abortion restricted" and "Abortion very protected") (Table ). Given that the supplemental questions are not validated, we did not perform modeling with this subset of the questionnaire. In our multivariable linear regression model examining the association between moral distress and maternal mortality, we did not find any associations. 34 (19.2%) respondents provided free responses, and thematic analysis revealed several themes. The most commonly referenced theme was around abortion and reproductive justice (22 responses, 64.7%), with the following illustrative quotes: I feel moral distress all the time for patients who are traveling here to get expensive care and pay out of pocket [for care] that they could have safely had provided locally by perfectly well qualified providers, but cannot get the care they need locally because of state laws and policies that prohibit and deny payment for needed services. It's appalling. I work in a Catholic hospital in an abortion restrictive state. I have huge amounts of moral distress because my patients do not have access to contraception in our hospital, and cannot chose a tubal during a C-section for example, or be discharged with LARC placement, and on an on. Then, as an extra layer, the state does not allow abortion care, which is hugely restrictive to my patients, traveling out of state isn't possible. They need this care and I cannot provide it.
“Not being able to offer termination when pregnancy outcomes are poor but maternal life not in danger (ex previable PPROM without evidence of infection).”

Other themes included patients not receiving standard of care due to various institutional or provider differences (5 responses, 14.7%), with the following illustrative quote:

“Witnessing very disparate quality of care between private MFM office and resident/fellow clinics.”

Themes also referenced moral distress as resulting from interdisciplinary power dynamics (3 responses, 8.8%), with the following illustrative quote:

“Lack of clear communication between inter- intradisciplinary teams, individualized care instead of teams based care on complex topics, resistance from other teams to accept consult advice.”

Another theme surrounded systemic issues involving barriers to payment or other social determinants of health (3 responses, 8.8%), with the following illustrative quote:

“Caring for patients whose socioeconomic circumstances significantly impact their care but I cannot improve those circumstances.”

Finally, the remaining responses expanding on moral distress emphasized lack of program support (1 response, 2.9%) and medical futility, such as in cases of classical cesarean birth at periviable gestational ages (1 response, 2.9%).
In our study of moral distress among MFM fellows, we found that respondents reported an average distress score of 85.9 ± 48.8, which is on par with previously published scores, such as a score of 96.3 ± 54.7 among physician respondents in the study that validated the MMD-HP , and female-identifying respondents reported higher measures of moral distress than male-identifying respondents on the validated questions. The association between female gender identity and moral distress has been reported in previous studies , with speculations regarding varying levels of moral resilience or sensitivity at the root of this finding. In the context of our study, it is possible that female-identifying fellows training in abortion-restricted states directly feel the weight of reproductive coercion to a greater extent than their male colleagues. Previous studies have also found inconsistent associations between length of training and the perception of moral distress, with some suggesting that the “crescendo effect,” or the buildup of moral distress over time, may disproportionately affect those in training for longer . In the context of this study, it is possible that more senior fellows are more likely to be coordinating care of medically complex individuals at the cusp of medical uncertainty or futility, and may bear the brunt of challenging clinical care, or may have had more cumulative exposure to scenarios of moral distress over time. In bivariate analyses, we did not find significant associations between moral distress and abortion restrictions or maternal mortality. In our multivariable regression model, there was a consistent trend towards more distress among fellows training in states with increasing abortion restrictions, and this difference was significant between those training in “Abortion restrictive” states as compared to “Abortion most protective” states.
Recent evidence surrounding moral distress among OB/GYNs after the Dobbs decision supports increased moral distress reported by providers in more restrictive states. In a 2023 survey study of 253 abortion providers, those in restrictive states reported higher measures on the moral distress thermometer (a visual scale between 0 and 10) as compared to those in protective states . However, in this same study, providers in protective states also reported moral distress, in the context of caring for patients who had traveled from out of state after receiving substandard care, and of the resulting overburdening of health systems within protective states . It is possible that in our interrelated and increasingly interconnected society, practicing in a silo is a progressively obsolete idea, and policies that impact any patient or provider can have extensive effects, which may dull the observed difference in moral distress between practitioners in restrictive versus protective states. We did not see differential measures of moral distress based on training within states with various levels of maternal mortality. Maternal deaths are fortunately infrequent, and these events may not have reached a clinical threshold to be reflected in perceptions of moral distress amongst trainees. However, there is considerable overlap between states with higher abortion restrictions and maternal mortality, and vice versa (Supplement ). An in-depth analysis of the free responses regarding limitations on abortion access and reproductive freedoms further details the specific moral distress perceived by the 19% of respondents who provided qualitative data. Respondents described feeling held back from offering care they have been trained to provide due to legal and institutional pressures, as well as distress on behalf of patients who incur additional barriers (travel, logistical, financial) in navigating fraught and difficult situations.
In addition, providers feel gagged from even discussing the range of options for patients with life-limiting fetal diagnoses or precarious maternal status. Again, it is possible that by utilizing a validated survey tool for this study, we missed the opportunity to gear questions to specific challenges within OB/GYN and MFM. However, our supplemental questions and free response field did allow us to capture more nuanced sentiments that could form the basis of the next iteration of surveying regarding moral distress. Our study has several limitations. First, we were limited by our response rate, which introduces significant bias and limits our interpretation of results. Given the voluntary nature of this survey, response bias likely played a significant role in our findings. Our survey was also fairly lengthy, with 33 clinical scenarios, each requesting a level and frequency of distress. We received a total of 245 responses, but 68 survey responses were incomplete and subsequently discarded. The survey itself may benefit from abbreviation and improved specificity as applied to MFM. We considered comparing characteristics between respondents and non-respondents among all MFM fellows to further contextualize our results, but beyond place of training, other demographic characteristics (gender identity, race/ethnicity, etc.) are not assignable without direct questioning. The strengths of this study include the use of a validated survey tool, as well as the introduction of additional fields that were both hypothesis-generating and allowed for ad lib elaboration on any causes of moral distress within MFM. This allowed us to learn from our colleagues’ particular challenges, both universal and specific to their location of training. Our pool of respondents was fairly representative of the demographics of MFM fellows nationally, and we received geographically diverse responses.
We were able to solicit a large number of complete responses, though fewer than our anticipated response rate. Further research could focus on methods to improve this response rate and reduce bias, such as utilizing a more user-friendly and targeted survey tool.
MFM fellows who identify as female, as well as those training in states with more abortion restrictions, reported higher measures of moral distress. Free-text responses reveal abortion restrictions to underlie a significant proportion of moral distress. Higher measures of moral distress can lead to physician burnout, compromised patient care, and loss of quality providers, especially in underserved regions. It is especially imperative in our current sociopolitical climate to support physicians directly impacted by legislative restrictions and to find ways of mitigating moral distress in the absence of significant legal change.
Below is the link to the electronic supplementary material.
Supplementary Material 1
Supplementary Material 2
Supplementary Material 3
Beyond minutiae: inferring missing details from global structure in fingerprints

The sweeping outline of a scene catches our eye before we notice the intricate details within it—much like recognizing a friend from afar before discerning their facial features up close. Navon demonstrated this global precedence experimentally by showing participants a series of large letters composed of smaller ones. Participants identified the large letters more quickly than the smaller ones. Our capacity to rapidly process the ‘gist’ of complex images has since been widely demonstrated. Oliva and Torralba showed that people can quickly grasp the gist of a scene—distinguishing a bustling cityscape from a serene forest—by relying on low-dimensional spatial configurations that form a global summary of the whole image. Indeed, people can categorize natural scenes with remarkable accuracy at image resolutions as low as 32 × 32 pixels (Torralba, ; Wolfe & Kuzmova, ) and they can discriminate them at resolutions as low as 2 × 2 pixels (Searston et al., ). At these low image resolutions, the finer details vanish, leaving behind a global summary of the image as the basis for category judgments. This ability to glean the global structure of a visual scene from low-dimensional information allows for rapid and accurate categorization, freeing up cognitive resources for detailed analysis of finer elements within the scene. The ability to rapidly extract global information and make accurate inferences based on limited visual input is a hallmark of human visual cognition (Brady, Stormer, & Alvarez, ; Oliva & Torralba, ). This rapid processing of global information is not only efficient but also serves to guide subsequent attention to relevant local features (Wolfe et al., ). The processing of global and local information can also be likened to holistic and part-based mechanisms in face recognition.
While global processing prioritizes an overarching summary of the visual input, akin to holistic processing, local processing focuses on analyzing finer, more granular features, akin to part-based strategies. Indeed, research in face recognition suggests that both holistic and part-based processing may contribute to superior visual recognition (Belanova et al., ). However, recent findings suggest that the contribution of holistic and part-based processing may differ across tasks and stages of processing. For example, while holistic processing is generally more dominant during recognition, part-based processing may play a crucial role during learning, particularly for unfamiliar faces (Leong, Estudillo, & Ismail, ; Chua & Gauthier, ). Additionally, individual differences in recognition ability are linked to both mechanisms, but not uniformly: some individuals rely more heavily on holistic processing, while others demonstrate superior featural analysis, reflecting distinct underlying strategies rather than a single holistic mechanism (Rezlescu et al., ). These findings show that global/holistic and local/part-based processing each contribute differently depending on the context—including the type of task, familiarity with the stimuli, and individual expertise. Our capacity to extract global visual structure is critical not just for recognizing faces or categorizing natural scenes, but also for expert decision-making in domains like radiology and fingerprint examination. Radiologists can swiftly diagnose abnormalities in medical images at a momentary glance (Brennan et al., ; Nodine et al., ). Expert chess players can rapidly extract meaningful patterns from complex board configurations (Gobet & Simon, 1996; Palmeri, Wong, & Gauthier, 2004). Good tennis players can anticipate opponents’ movements well before they occur (Williams et al., ). And seasoned birdwatchers can efficiently identify different species in less than half a second (Tanaka & Curran, ). 
Some of these expert abilities are said to rely on holistic processing, where the configuration of features is processed as an integrated whole rather than as isolated parts (Tanaka & Farah, 1993; Gauthier & Tarr, 2002). Across domains, these feats of expertise demonstrate that a well-developed sensitivity to the global structure of a scene or an image is crucial for supporting accurate visual inferences in a variety of contexts. In the present study, we extend this work into the domain of fingerprint examination. We explore the role of global processing as a function of expertise by investigating the extent to which novices and experts can accurately distinguish fingerprints without the minutiae.

Expertise in fingerprint examination
While media portray fingerprint examination as computer-driven, it fundamentally relies on human expertise. Expert examiners manually compare latent fingerprints found at crime scenes to prints in police databases. This comparison process is complicated by distortions and variations in latent impressions, by the increasing similarity of prints retrieved by database searches as computer algorithms improve (Dror & Mnookin, ), and by the potential for contextual information to introduce bias in expert judgments (Kukucka & Dror, ). The diverse range of cases means that examiners rarely build familiarity with any one individual’s prints. Despite these challenges, fingerprint experts exhibit remarkable accuracy in their comparison decisions, even under less-than-ideal conditions (Growns et al., ; Tangen et al., ; Tangen et al., ; Ulery et al., ). The primary task of a fingerprint examiner is to infer whether two prints belong to the same finger or different fingers. This task is often described as a careful comparison of local features in the prints called ‘minutiae’—and experts outperform novices at searching and locating specific features in prints (Hicklin et al., ; Robson et al., ).
In contrast to expectations from face recognition research, Vogelsang, Palmeri, and Busey found only weak evidence for holistic processing by experts using a composite fingerprint task adapted from the face recognition literature, which suggests that local processing strategies may play a large role. However, experts can also reliably distinguish prints even when the minutiae are obscured or no longer available. Fingerprint experts can accurately identify prints clouded in visual noise (Thompson & Tangen, ) and presented after a time delay (Corbett et al., ). Studies using eye-tracking methods have shown that experts make smaller, more precise eye movements when viewing prints compared to novices (Busey & Vanderkolk, ; Busey et al., ), suggesting they are more adept at extracting global ‘holistic’ information without exhaustively searching for local features across various regions of a print (Busey & Parada, ). Evidence suggests that fingerprint experts are sensitive to the global information distributed across different fingers of the same individual. Normally, these experts compare prints at the level of the individual finger: their task is to distinguish different impressions left by the same finger of the same individual (e.g., Smith’s right thumb) and different impressions left by different individuals. Searston and Tangen , however, tested whether fingerprint experts can also discriminate prints at the individual person level. In other words, how well can these experts distinguish between different impressions left by different fingers of the same person (e.g., prints from Smith’s right thumb, index, ring, middle or little fingers) and different impressions left by different fingers of different people? In this task, it is impossible to rely on a careful comparison of minutiae in each print because these local features and patterns vary across an individual’s fingers.
Despite this variability, even novices performed above chance at distinguishing prints that were different impressions from different fingers of the same individual—and the experts were considerably more accurate than the novices. This example illustrates that there is also global structure distributed across an individual’s fingerprints and that experts have a heightened sensitivity to this global information relative to novices. In casework, fingerprint experts are trained to conduct a detailed analysis of the minutiae in the latent print before comparing it to prints from known individuals (Robson et al., , ), with some employing bias-reduction techniques like linear sequential unmasking to enhance decision accuracy (Dror et al., ). However, the above demonstrations suggest that fingerprint experts are not merely relying on local feature comparisons but are leveraging both global and local processing to achieve their remarkable accuracy. This sensitivity to global structure in prints likely comes about with extensive exposure to prints (Kellman & Garrigan, ; Richler & Palmeri, ). Longitudinal evidence shows that fingerprint trainees get better at discriminating fingerprints and fingerprint patterns as they progress through their on-the-job training to become experts (Searston & Tangen, , ). More recent experimental evidence has also shown that statistical summary information can facilitate perceptual learning in fingerprint examination (Growns et al., , ). This research suggests that experts are drawing on a mental repository of similar prints (Brooks, ; Medin & Schaffer, ) that allows them to build a richer global representation prior to feature segmentation (Oliva & Torralba, )—and that this enriched global impression supports more efficient analysis of the finer details.

Inferring missing details in fingerprints
A heightened sensitivity to global information may also facilitate accurate inferences based on incomplete data.
Training to infer missing features from a category instance can result in better transfer to novel situations compared with standard training methods (Jones & Ross, ). Deducing that a bee with pale opalescent blue stripes on its abdomen must have a burrow made of soft stone is an inference that emphasizes commonalities among category members. Conversely, inferring the category label “blue banded bee” from the exemplar emphasizes information distinguishing between categories (Chin-Parker & Ross, ). This sensitivity to visual structure helps experts make accurate inferences even with imperfect visual information—crucial in the context of fingerprint examination. Given the varied conditions under which fingerprint examiners work, they often need to make decisions based on incomplete information. Fingerprint experts sometimes work with pristine, fully rolled prints captured by a computerized fingerprint scanner. At other times, prints can be highly distorted or incomplete. Variation in surface, pressure, movement, skin residue, and even the compounds used to lift or capture a crime-scene (latent) print—such as phosphorescent dye—can affect how a print appears and what aspects of it might be missing. Imagine cradling a glass in your hand and loosening and tightening your grip. If you were to try this exercise, you may notice how different parts of each finger make contact with the glass, and that as you adjust your grip, your skin spreads and folds across the surface. An examiner’s appreciation for the gist of a print, and the redundancies dispersed across it, might help them infer what might be missing in these challenging circumstances.

The present experiments
The present experiments test the hypothesis that fingerprint experts can leverage global information to infer missing details in highly distorted or incomplete latent prints more effectively than novices.
We designed two experiments to limit participants’ reliance on minutiae when comparing prints: In Experiment 1, participants engage in a Fill-in-the-Fragment task (Fig. ). They must infer the visual detail missing from a blank space cropped from a print, relying solely on the surrounding visual context of the print. This setup assesses their ability to use global context to reconstruct incomplete prints. Experiment 2 employs a Fragment Comparison task. Participants compare small windows or ‘fragments’ of visual detail sampled from different regions (of different impressions) of the same finger, or different regions of a different finger altogether. Here, they must infer the missing visual surrounds of each fragment, further testing their capacity to use global visual patterns for accurate decisions. By comparing the performance of experts and novices across these visual inference tasks, this research aims to understand how fingerprint experts use global information to make accurate decisions. We aim to determine whether their expertise enables them to compensate for missing or obscured minutiae by relying on global visual patterns. This investigation builds on previous studies exploring the role of global or holistic processing in face and scene recognition and seeks to isolate the role of global information in fingerprint examination.

Experiment 1: Fill in the Fragment
In Experiment 1, we tested how well people can infer missing sections from a fingerprint and compared the performance of expert fingerprint examiners to that of novices. Building on Searston and Tangen’s findings—which demonstrated that fingerprint experts can extract global information distributed across different fingers of the same person—we explored whether sensitivity to such distributed information would enable experts to infer missing visual details from a print.
Specifically, we examined whether individuals could deduce the correct section of ridge detail missing from a print based on surrounding information—ridge flow, thickness, and friction ridge characteristics dispersed across a fingerprint. To investigate this, we recruited a group of fingerprint experts and an age- and gender-matched group of fingerprint novices to complete a Fill-in-the-Fragment task. Participants had to infer the missing fragment of a fingerprint based on the surrounding context. The critical question was: can people use the global structure or style of a person’s fingerprint to accurately infer a small piece of missing friction ridge skin detail?
Participants

Sensitivity analysis
We conducted a sensitivity analysis based on an estimated sample of 30 experts and 30 novices. Thirty matched expert-novice pairs, each completing 48 trials (totaling 2,880 observations), provided sufficient power (1 − β = 0.82) to detect a moderate difference between experts and novices (Cohen’s d = 0.45). We planned to collect data from as many experts as possible and then test an equal number of novices, as it is often difficult to recruit experts due to their busy schedules.

Expert group
We collected data from 44 expert fingerprint examiners (25 females, 19 males; median age = 42; min = 29; max = 60) from Australian state and federal police agencies. All experts were qualified, court-practicing fingerprint examiners. These experts completed the two tasks reported in this paper—along with seven other experimental tasks—in a random order over one or two days during breaks in their casework. The other tasks were unrelated to the research question addressed in this manuscript and have been or will be reported elsewhere (e.g., Corbett et al., ; Robson et al., , ). The examiners had an average of 15 years of experience examining fingerprints (min = 5, max = 40).
Novice group

Novice participants—with no formal experience in fingerprint examination—were recruited from The University of Adelaide, The University of Queensland, and Murdoch University communities. Forty-four novices (25 females, 19 males; median age = 43; min = 26; max = 62) participated for cash payment (AUD$20) and were ‘yoked’ or matched to experts based on age (± 2 years), gender, and level of education. Additionally, the novice participants were incentivized to perform to the best of their ability with the offer of an additional cash payment (AUD$10) if they could exceed the performance of their expert counterpart.

Design

Task

In the ‘Fill-in-the-Fragment’ task, participants were presented with a fingerprint in the center of a computer screen containing a 132 × 132 pixel blank spot (see Fig. ). The aim was to identify the fragment that correctly filled this blank spot from seven fragments displayed at the bottom of the screen. Each trial included one target fragment that corresponded with the blank spot and six distractor fragments from different fingerprints. The target fragment was randomly positioned on each trial—with a long-run probability of 1 in 7 (0.143) for guessing correctly.

Pilot

We chose seven fragments per trial to maximize variance between novices and experts. This decision was based on a pilot experiment with novices ( N = 8) in which we tested the difficulty of the task with three, five, seven, or nine fragments. Novices correctly selected the target 73% of the time with three options, 59% with five options, 45% with seven options, and 42% with nine options. The task proved challenging with seven or more fragments—as novices made errors on more than half of the trials.

Trial sequencing

Each participant completed 48 unique trials, each featuring a new fingerprint and corresponding fragments.
Novices were presented with the same trial sequences as their expert counterparts, ensuring identical stimuli and order for both groups. This matched-pairs design and method of yoking trial sequences between experts and novices ensured that any observed differences were most likely due to genuine differences in performance rather than variations in the stimuli.

Stimuli

All fingerprints were sourced from the National Institute of Standards and Technology (NIST) Special Database 300 ‘rolled’ set (Fiumara, ). This set—originally donated by the United States Federal Bureau of Investigation—contains 8,871 prints collected in operational policing contexts, preserving their natural variation in quality, completeness, and contextual detail. For this experiment, we used a subset of 1,200 prints, including 10 prints of each finger type (e.g., thumb, index, middle, ring, and little fingers from both hands) from 120 individuals. We standardized the width of all prints to 640 pixels while allowing the height to vary naturally, preserving the original aspect ratio of each print. From each of these standardized prints, a 132 × 132 pixel circular patch of friction ridge detail was removed, creating a set of 1,200 prints with missing fragments and 1,200 corresponding fingerprint fragments for targets and distractors. This fragment size represents approximately 4.25% of the total area of a print with dimensions of 640 × 640 pixels. All other original details in the prints—including natural variation in contrast, hue, and luminance—were left intact. To ensure the task presented a challenge to participants, distractor fragments were extracted from different fingers of the same person, ensuring they were highly similar in overall pattern but different in detail to the target (see Searston & Tangen, ). The fingerprint and fragments on each of the 48 trials were randomly sampled from one of the 120 people.
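A quick arithmetic check of the quoted coverage figure (which treats the fragment as its 132 × 132 pixel bounding square relative to a 640 × 640 pixel print):

```python
# Fraction of a 640 x 640 pixel print covered by a 132 x 132 pixel fragment
fragment_area = 132 * 132   # 17,424 pixels
print_area = 640 * 640      # 409,600 pixels
fraction = fragment_area / print_area
print(f"{fraction:.2%}")    # prints "4.25%"
```

Since the patch itself was circular, the removed area is slightly smaller than the square window (π/4 of it, about 3.3%); the 4.25% in the text corresponds to the square bounding box.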
The fingerprint and target fragment were randomly selected from one of the person’s ten finger types (thumb, index, middle, ring, or little finger from either hand), while the distractor fragments were randomly selected from six of the remaining nine finger types. The location of the missing fragment in the print varied from trial to trial, but fragments were systematically extracted by eye from similar parts of the finger to maximize target-distractor similarity on any given trial. For instance, if the target fragment was taken from the top left part of Smith’s left thumb, the distractors were taken from the top left parts of Smith’s other fingers that most closely resembled the target. This procedure ensured that distinctive minutiae between individual prints could not be used to distinguish the fragments.

Procedure

The task was presented to participants on a 13-inch MacBook computer. Participants first watched an instructional video explaining the task, including examples (see instructional video < https://youtu.be/YpStL-dAtS0 >). Following this, they viewed a total of 48 prints with blank spots, one at a time in sequence. Each trial displayed seven corresponding fragments (one target and six distractors) lined up below the fingerprint. Participants made their choice by clicking on the fragment they believed filled in the blank or missing detail in the print. Immediate feedback was provided—an audible tone and a green checkmark for correct answers, or a red “✕” for incorrect answers. The fingerprint and fragments remained on screen until the participant clicked on one of the seven fragments and during the 500-ms feedback window. There was a 500-ms interval between their response and the next trial.
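The finger-sampling scheme described above (target from one of ten fingers, distractors from six of the remaining nine) can be sketched as follows. This is a hypothetical illustration with names of our own choosing; in the actual experiment, fragment locations were additionally matched by eye:

```python
import random

# Ten finger types per person: five digits on each hand.
FINGERS = [f"{hand}_{digit}"
           for hand in ("left", "right")
           for digit in ("thumb", "index", "middle", "ring", "little")]

def build_trial(rng):
    """Sample the finger sources for one Fill-in-the-Fragment trial.

    The target fragment comes from one of the person's ten fingers; the
    six distractor fragments come from six of the remaining nine fingers.
    """
    target = rng.choice(FINGERS)
    distractors = rng.sample([f for f in FINGERS if f != target], 6)
    options = [target] + distractors
    rng.shuffle(options)  # random target position: 1-in-7 guessing rate
    return target, options

target, options = build_trial(random.Random(42))
```

Because the six distractors are drawn without replacement from the same person's other fingers, every option shares that person's overall pattern, which is what makes the lineup hard to solve from minutiae alone.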
If participants took longer than 15 s to respond, a text prompt appeared during the inter-trial interval, stating: “Please try to make your choice in less than 15 s.” We allowed participants’ response times to vary naturally within this deadline to explore the dynamics of the decision-making process.

Hypotheses

Humans have an exceptional ability to recognize complex scenes with minimal detail (Navon, ; Oliva & Torralba, ; Searston et al., ). Applying this research to the current fill-in-the-fragment task, we hypothesized that both novices and experts would be able to identify the missing fragment by comparing global image properties with above-chance accuracy. However, given extensive research demonstrating experts’ superior ability to discriminate prints compared to novices—even under conditions with limited time and information (e.g., Searston & Tangen, ; Thompson & Tangen, )—we expected that experts would outperform novices. That is, while novices were expected to perform above chance, the performance of experts was expected to be significantly higher due to their vast exposure to a wide variety of prints and how they tend to look and vary.
Results

The full dataset and accompanying analysis script (an R Notebook) for this experiment are available on the Open Science Framework at: https://osf.io/ndxpc .
Proportion correct

Experts relative to novices

Expert fingerprint examiners demonstrated a higher proportion of correct responses ( M = 0.508, SD = 0.122) compared to novices ( M = 0.450, SD = 0.126; see Fig. ). A paired t -test confirmed this difference, t (43) = 2.295, p = 0.027—indicating that experts performed significantly better than novices. The mean difference was 0.058, with a 95% confidence interval ranging from 0.007 to 0.109, suggesting a moderate (Cohen’s d = 0.47) effect size.

Performance relative to chance

To further assess performance, we conducted one-sample t-tests comparing the proportion of correct responses of both experts and novices against the chance level of 0.143 (corresponding to a 1 in 7 probability of guessing the correct fragment). Expert performance was significantly above chance, with a mean proportion correct of 0.508 ( SD = 0.122), t (43) = 19.988, p < .001. The 95% confidence interval for expert performance was between 0.470 and 0.545. Similarly, novice performance was also significantly above chance, with a mean proportion correct of 0.450 ( SD = 0.126), t (43) = 16.280, p < .001. The 95% confidence interval for novice performance was between 0.411 and 0.488. These results indicate large effect sizes for both experts (Cohen’s d = 3.01) and novices (Cohen’s d = 2.45) when compared to chance.

Response times

Response time analysis showed that experts had a mean response time of 11.49 s ( SD = 4.19), while novices had a mean response time of 11.97 s ( SD = 5.19). A paired t -test comparing the response times between experts and novices revealed no significant difference, t (43) = − 0.431, p = 0.668. The mean difference in response times was − 0.476 s, with a 95% confidence interval ranging from − 2.700 to 1.748 s. These results indicate that both groups took a similar amount of time to respond on a given trial.
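A minimal, stdlib-only sketch of the paired comparison used in these analyses (illustrative only, with toy data; the authors' actual analysis is in the linked R Notebook):

```python
import statistics

def paired_t(xs, ys):
    """Paired t statistic for two equal-length samples, e.g., each
    expert's proportion correct vs. their yoked novice's."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    return statistics.fmean(diffs) / (statistics.stdev(diffs) / n ** 0.5)

# Toy data: five expert-novice pairs' proportion correct (not the real data).
experts = [0.52, 0.48, 0.55, 0.50, 0.47]
novices = [0.46, 0.44, 0.49, 0.45, 0.43]
t = paired_t(experts, novices)
```

The yoked design is what licenses the paired test: each difference score controls for the stimuli and trial order shared by an expert and their matched novice.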
Experiment 2: fragment comparison

In Experiment 1, participants could reliably identify missing sections of ridge detail in a fingerprint using the surrounding context of the print. Experts were also more accurate at identifying these missing fragments compared with novices. Since there was no overlapping local information between the fragments and the prints, these findings suggest that participants were using the surrounding visual context of the print to infer the missing local information. However, in this task, the fragments were extracted from the exact same image as the corresponding print. Since participants could mentally trace the ridges from the surrounding context to locate the target, above-chance performance could arise from processing local rather than global information. Experiment 2 addresses this limitation by testing whether experts and novices can use global information when local tracing is impossible. In practice, fingerprint examiners do not ‘match’ images of prints per se; they distinguish between different instances or impressions made by the same finger and those made by different fingers. Fingerprint impressions made by the same finger can vary due to factors such as surface structure, perspiration, contaminants on the skin, skin flexibility, and pressure and movement during impression-making. Likewise, fingerprint impressions made by different fingers can look quite similar, due to the use of computer algorithms to speed up the search for comparison prints in police databases. As such, fingerprint experts do not match images; they match different impressions made by the same finger. In Experiment 2, we tested whether participants could infer the identity of a fingerprint based solely on global information in a task where the corresponding fragments are taken from two different impressions of the same finger.
Critically, these fragments were sampled from different local regions of the finger in each impression—such that they shared no overlapping features of friction ridge skin (see Fig. for an example). Whereas participants in Experiment 1 inferred what fragment was missing given the surrounding visual context of a single impression, in Experiment 2 they needed to infer the surrounding visual context from the fragment of a different impression that was sampled from a different region of the finger. Mental tracing is impossible in this task for two reasons. First, the fragments come from different regions of the finger that share no overlapping ridge detail. Second, since the fragments are taken from different impressions at different times, local details can vary due to changes in pressure and other distortions during deposition. Previous research has shown that fingerprint experts can discriminate same-source and different-source fingerprints with high accuracy (Tangen et al., ; Thompson et al., a). In those studies, participants were able to compare overlapping features between two fingerprint impressions to decide if they were made by the same person or finger. In this current experiment, participants were given a small fragment of a fingerprint (132 × 132 pixels) and asked to identify which fragment—out of a lineup of four other fragments—came from the same finger. All but one of these four fragments were sampled from different regions of different fingers of the same individual. The corresponding fragment was sampled from a different impression and a different region of the same finger. We refer to this as the Fragment Comparison task. We examined whether people could discriminate between two types of fragments: those sampled from different regions of different impressions of the same finger, and those sampled from different regions of different impressions of different fingers. 
This task forces participants to rely on just a small piece of friction ridge skin to infer the global structure or style of a person’s fingerprint.
Participants, design and procedure

The same participants from Experiment 1—44 experts and 44 age- and gender-matched novices—completed the Fragment Comparison task in Experiment 2. The general procedure was identical to that of Experiment 1. Participants viewed an instructional video (see instructional video < https://youtu.be/HdFf2pzOR2Y >) before completing 48 trials of the Fragment Comparison task. On each trial, a probe fragment from a new print was presented in the center of the computer screen (see Fig. ).
The probe was presented along with four other fragments at the bottom of the screen. One of these four fragments came from the same finger as the probe (“target”). The probe and the target fragments were extracted from different parts of different prints left by the same finger. The other three fragments in the lineup were from different fingers of the same individual (“distractors”). The target fragment was randomly positioned among the fragment lineup on each trial, and participants were asked to select the corresponding fragment each time. As in Experiment 1, corrective feedback was provided on each trial, along with a prompt to respond on trials with extended response times (beyond 15 s). The distractors and targets were extracted from different fingers of the same person, and from the same part of the print on each trial. This procedure further increased the difficulty of the task—as the targets and distractors shared similarities based on the common visual structure present across an individual’s prints (e.g., see Searston & Tangen, ). However, it also enabled us to isolate participants’ ability to identify individual fingers based on global image properties. We prepared trial sequences using different randomization seeds for each of the 44 expert-novice pairs, mirroring the same matched-pairs yoked sequence design as in Experiment 1. Each pair completed an identical trial sequence—ensuring they were perfectly matched on stimuli and order of presentation.

Stimuli

The materials for Experiment 2 were sourced from the NIST Special Database 300 ‘plain’ and ‘rolled’ sets (Fiumara, ). These sets include fingerprints taken from the same individuals at different times—encompassing 2 impressions × 10 fingers from each donor. We selected four rolled prints and one plain print (or “slap”) from each of 200 donors, resulting in a total of 1,000 prints.
The rolled prints were from four different fingers of the same individual donor, and the plain print was randomly chosen from one of these four fingers. From each print, we extracted two fragments: one from the top half and one from the bottom half of the finger. This process yielded 1,600 rolled fragments for targets and distractors and 400 plain fragments for probes. Each fragment was manually cropped to a standardized size of 132 × 132 pixels to ensure that targets and distractors were selected from similar areas of the prints without overlapping with the probe. The probe fragments were randomly chosen from either the top or bottom half of the plain prints. The four other fragments—including the target and three distractors—were sampled from the opposite part of the corresponding rolled prints from the same person. This method ensured that the probes were always taken from different parts of the finger than the target and distractor fragments. The friction ridge skin details in the probe fragment did not correspond with those in the target or distractors. Therefore, the probes and target fragments shared no specific minutiae in common. The question is whether the common global characteristics shared between the probe and the target fragments—such as general patterning, direction of ridge flow, ridge thickness, and the individual’s general tendencies to apply more or less pressure—are sufficient for identifying prints left by the same finger.

Hypotheses

Building on the results of Experiment 1—where experts demonstrated superior ability to infer missing ridge details from fingerprints based on global image properties (e.g., ridge flow and patterning)—we hypothesized that in Experiment 2, both novices and experts would be able to identify matching fragments using similar global cues. However, we expected that experts would outperform novices due to their extensive experience with highly variable and impoverished latent impressions.
Specifically, we predicted that novices would perform above chance, but experts would achieve significantly higher accuracy due to their exposure to fingerprint patterns and relationships between features.
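As a concrete illustration, the sampling scheme for a single trial can be sketched in code. This is a hypothetical simplification (fragments are modeled as labeled tuples rather than 132 × 132 pixel images, and the finger identifiers are invented for illustration), not the authors' actual stimulus pipeline:

```python
import random

def build_trial(donor_fingers, rng):
    """Sketch one Fragment Comparison trial for a single donor.

    `donor_fingers` lists four finger IDs with rolled prints; the target
    finger also has a plain ("slap") print.  A fragment is modeled as a
    (finger, impression, half) tuple.
    """
    target_finger = rng.choice(donor_fingers)
    probe_half = rng.choice(["top", "bottom"])
    other_half = "bottom" if probe_half == "top" else "top"

    # Probe: taken from the plain impression of the target finger.
    probe = (target_finger, "plain", probe_half)
    # Target: same finger, but a different impression AND the opposite half,
    # so probe and target share no overlapping ridge detail.
    target = (target_finger, "rolled", other_half)
    # Distractors: the same half of rolled prints from the donor's other fingers.
    distractors = [(f, "rolled", other_half) for f in donor_fingers if f != target_finger]

    lineup = [target] + distractors
    rng.shuffle(lineup)          # target randomly positioned in the lineup
    return probe, target, lineup

rng = random.Random(0)
probe, target, lineup = build_trial(["index", "middle", "ring", "little"], rng)
```

The four-alternative lineup gives the 1-in-4 (0.25) guessing rate used as the chance baseline in the Experiment 2 analyses.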
The data analysis plan and workflow were the same as in Experiment 1, and the data and script are available at: https://osf.io/ndxpc .

Experts relative to novices

Expert fingerprint examiners demonstrated a higher proportion of correct responses ( M = 0.358, SD = 0.093) compared to novices ( M = 0.305, SD = 0.073; see Fig. ).
A paired t -test confirmed this difference, t (43) = 2.775, p = 0.008—indicating that experts performed significantly better than novices. The mean difference was 0.054, with a 95% confidence interval ranging from 0.015 to 0.092, suggesting a moderate effect size (Cohen’s d = 0.64).

Performance relative to chance

As in Experiment 1, we conducted one-sample t -tests comparing the proportion of correct responses of both experts and novices against the chance level of 0.25 (corresponding to a 1 in 4 probability of guessing the correct fragment). Expert performance was significantly above chance, with a mean proportion correct of 0.358 ( SD = 0.093), t (43) = 15.65, p < 0.001. The 95% confidence interval for expert performance was between 0.330 and 0.387. Similarly, novice performance was also significantly above chance, with a mean proportion correct of 0.305 ( SD = 0.073), t (43) = 14.955, p < 0.001. The 95% confidence interval for novice performance was between 0.283 and 0.327. These results indicate large effect sizes for both experts (Cohen’s d = 2.36) and novices (Cohen’s d = 2.26) when compared to chance.

Response times

Response time data showed that experts had a mean response time of 8.34 s ( SD = 4.59) on the Fragment Task, while novices had a mean response time of 8.17 s ( SD = 3.75). A paired t -test comparing the response times between experts and novices revealed no significant difference, t (43) = 0.173, p = 0.864. The mean difference in response times was 0.167 s, with a 95% confidence interval ranging from − 1.785 to 2.119 s. These results suggest that both groups took a similar amount of time to respond on a given trial.

General discussion

We conducted two experiments examining how fingerprint experts and novices use global visual information to make accurate inferences about missing details in fingerprints.
Specifically, we aimed to determine whether experts, compared to novices, could more effectively use surrounding visual context to infer missing ridge detail in degraded or incomplete fingerprints. Our results show that while both groups can perform above chance, experts consistently outperform novices—demonstrating a heightened sensitivity to global image properties that enhances their ability to process incomplete or degraded prints. In Experiment 1, participants could reliably identify missing sections of ridge detail by using the surrounding context, with experts showing significantly higher accuracy than novices. This supports the idea that experts have developed a refined sensitivity to the overall structure and pattern of fingerprints through extensive experience. This refined ability to quickly glean the gist of a print provides a foundation for accurate inferences about missing details. Previous research supports this interpretation, as experts in various domains demonstrate an ability to leverage global visual information for accurate and rapid decision-making (Brennan et al., ; Nodine et al., ; Oliva & Torralba, ). This ability to integrate global information aligns with mechanisms of holistic processing described in the face recognition literature, where the spatial configuration of features is perceived as an integrated whole, facilitating rapid and accurate decisions (Belanova et al., ; Chua & Gauthier, ). Experiment 2 extended these findings by testing whether participants could discriminate fingerprint fragments sampled from different impressions of the same finger. Critically, these fragments were also sampled from different regions of the different impressions—such that they shared no overlapping local features of friction ridge skin, unlike a typical fingerprint matching task. 
Experts again outperformed novices, demonstrating their superior ability to use global image properties to draw inferences even when information is limited and local minutiae are unavailable. This further supports the hypothesis that experts rely on a rich mental repository of fingerprint patterns, enabling them to build a global representation of a print that aids in accurate comparison decisions (Brooks, ; Medin & Schaffer, ). While performance was significantly above chance in both experiments, it was generally quite poor relative to other fingerprint matching experiments (e.g., Thompson et al., b). This outcome is not surprising given the challenging nature of the task—the fragments were small (132 × 132 pixels), and the distractors were highly similar to the target fragments, all sampled from different regions of the same individual’s fingerprints. However, the generally poor performance indicates that more information and time to conduct a detailed analysis of minutiae are also critical to making accurate comparison decisions (Robson et al., ). These findings align with the idea that fingerprint experts heavily rely on local or part-based processing, particularly when local minutiae provide diagnostic features (Vogelsang, Palmeri, & Busey, ). However, the persistence of expert-novice differences in the absence of local diagnostic information suggests that experts may be able to switch between global (holistic) and local (part-based) mechanisms depending on the visual context of the case. Much like scene and face recognition, holistic impressions of fingerprints may guide attention to key local features, enabling experts to balance efficiency and accuracy in their interpretation of visual evidence. In general, our findings complement previous research showing that perceptual expertise involves the ability to quickly and accurately process global visual information (Busey & Vanderkolk, ; Thompson & Tangen, ; Thompson et al., ).
We also add to existing research on scene and face recognition by demonstrating that a capacity to extract global visual information can support accurate decision-making in a complex visual comparison task. Our findings show that this ability is not limited to natural scenes and faces but extends to the specialized expert domain of fingerprint examination—where examiners must often make decisions based on incomplete or degraded prints. This insight is relevant to a range of contexts where information is compromised, such as when radiologists detect abnormalities in low-resolution medical images (Boita et al., ), police identify suspects from blurry surveillance footage (Burton et al., ), or remote sensing experts interpret satellite imagery when images are affected by atmospheric interference or resolution limitations (Ahn et al., ). Future research may wish to explore the generality of our findings to such contexts. The current experiments also build on prior studies showing that fingerprint experts can infer the identity of a print by comparing impressions of different fingers from the same person (Searston & Tangen, ). Perceptual expertise in fingerprint examination appears not to rely solely on detecting and comparing local features (Hicklin et al., ; Robson et al., , ), but also on information distributed across a print and between different prints. This idea is similar to other empirical findings suggesting that expertise in fingerprint examination rests partly on sensitivity to global or holistic information (Busey & Vanderkolk, ; Busey & Parada, 2009; Thompson & Tangen, ). For example, Thompson and Tangen showed that fingerprint experts can accurately match prints clouded in noise or prints presented only very briefly. A careful comparison of local features cannot explain these expert-novice differences.
Moreover, our research suggests that while novices can perform above chance in these tasks, expertise substantially enhances the ability to use global visual information for accurate fingerprint comparison decisions. This highlights the importance of experience and extensive exposure to a wide variety of prints in developing the perceptual skills needed for expert performance (Searston & Tangen, , ). While our findings provide insight into the perceptual mechanisms underlying expertise in fingerprint examination, they should not be taken as evidence that examiners rely on inferred details in operational settings. They should also not be used as a validation of expert performance under challenging casework conditions in forensic reporting or court testimony. Instead, these results highlight how developing a sensitivity to global image properties might support fingerprint comparison decisions under controlled conditions, contributing to our general understanding of perceptual expertise. Future research should examine how global and local processing work together in expert decision-making, and test training methods that develop both abilities in novice analysts (see Growns et al., ; Robson et al., ; Searston et al., for examples of effective training). Additionally, examining how experts integrate global and local information under different conditions could provide deeper insights into the cognitive mechanisms underlying fingerprint expertise (Robson et al., ). In conclusion, our study provides evidence that fingerprint expertise involves leveraging global visual information alongside local minutiae. Under controlled experimental conditions, experts demonstrated superior ability to leverage global properties of fingerprints—such as ridge flow patterns—to make accurate comparisons. This heightened sensitivity to global patterns may guide experts’ attention to relevant local features, enabling more efficient and accurate detailed analysis. 
Although these results advance our theoretical understanding of perceptual expertise, we emphasize that they should not necessarily be used to inform or validate operational fingerprint examination procedures. Rather, these findings further reveal the perceptual mechanisms that characterize expert performance in high-stakes visual comparison domains, from fingerprint examination to medical image interpretation. What emerges is a defining feature of perceptual expertise: the ability to rapidly process global visual information while maintaining precise attention to local detail.
Response time data showed that experts had a mean response time of 8.34 s ( SD = 4.59) on the Fragment Task, while novices had a mean response time of 8.17 s ( SD = 3.75). A paired t-test comparing the response times between experts and novices revealed no significant difference, t (43) = 0.173, p = 0.864. The mean difference in response times was 0.167 s, with a 95% confidence interval ranging from − 1.785 to 2.119 s. These results suggest that there was no significant difference in response times between experts and novices—indicating that both groups took a similar amount of time to respond on a given trial. We conducted two experiments examining how fingerprint experts and novices use global visual information to make accurate inferences about missing details in fingerprints. Specifically, we aimed to determine whether experts, compared to novices, could more effectively use surrounding visual context to infer missing ridge detail in degraded or incomplete fingerprints. Our results show that while both groups can perform above chance, experts consistently outperform novices—demonstrating a heightened sensitivity to global image properties that enhances their ability to process incomplete or degraded prints. In Experiment 1, participants could reliably identify missing sections of ridge detail by using the surrounding context, with experts showing significantly higher accuracy than novices. This supports the idea that experts have developed a refined sensitivity to the overall structure and pattern of fingerprints through extensive experience. This refined ability to quickly glean the gist of a print provides a foundation for accurate inferences about missing details. Previous research supports this interpretation, as experts in various domains demonstrate an ability to leverage global visual information for accurate and rapid decision-making (Brennan et al., ; Nodine et al., ; Oliva & Torralba, ). 
This ability to integrate global information aligns with mechanisms of holistic processing described in the face recognition literature, where the spatial configuration of features is perceived as an integrated whole, facilitating rapid and accurate decisions (Belanova et al., ; Chua & Gauthier, ). Experiment 2 extended these findings by testing whether participants could discriminate fingerprint fragments sampled from different impressions of the same finger. Critically, these fragments were also sampled from different regions of the different impressions—such that they shared no overlapping local features of friction ridge skin, unlike a typical fingerprint matching task. Experts again outperformed novices, demonstrating their superior ability to use global image properties to draw inferences even when information is limited, and local minutiae are unavailable. This further supports the hypothesis that experts rely on a rich mental repository of fingerprint patterns, enabling them to build a global representation of a print that aids in accurate comparison decisions (Brooks, ; Medin & Schaffer, ). While performance was significantly above chance in both experiments, it was generally quite poor relative to other fingerprint matching experiments (e.g., Thompson, et al., b). This outcome is not surprising given the challenging nature of the task—the fragments were small (132 × 132 pixels), and the distractors were highly similar to the target fragments, all sampled from different regions of the same individual’s fingerprints. However, the generally poor performance indicates that more information and time to conduct a detailed analysis of minutiae is also critical to making accurate comparison decisions (Robson et al., ). These findings align with the idea that fingerprint experts heavily rely on local or part-based processing, particularly when local minutiae provide diagnostic features (Vogelsang, Palmeri, & Busey, ). 
However, the persistence of expert-novice differences in the absence of local diagnostic information suggests that experts may be able to switch between global (holistic) and local (part-based) mechanisms depending on the visual context of the case. Much like scene and face recognition, holistic impressions of fingerprints may guide attention to key local features, enabling experts to balance efficiency and accuracy in their interpretation of visual evidence. In general, our findings complement previous research showing that perceptual expertise involves the ability to quickly and accurately process global visual information (Busey & Vanderkolk, ; Thompson & Tangen, ; Thompson et al., ). We also add to existing research on scene and face recognition by demonstrating that a capacity to extract global visual information can support accurate decision-making in a complex visual comparison task. Our findings show that this ability is not limited to natural scenes and faces but extends to the specialized expert domain of fingerprint examination—where examiners must often make decisions based on incomplete or degraded prints. This insight is relevant to a range of contexts where information is compromised, such as when radiologists detect abnormalities in low-resolution medical images (Boita et al., ), police identify suspects from blurry surveillance footage (Burton et al., ), or remote sensing experts interpret satellite imagery when images are affected by atmospheric interference or resolution limitations (Ahn et al., ). Future research may wish to explore the generality of our findings to such contexts. The current experiments also extend prior studies showing that fingerprint experts can infer the identity of a print by comparing impressions of different fingers from the same person (Searston & Tangen, ).
Perceptual expertise in fingerprint examination appears not to rely solely on detecting and comparing local features (Hicklin et al., ; Robson et al., , ), but also on information distributed across a print and between different prints. This idea is similar to other empirical findings suggesting that expertise in fingerprint examination rests partly on sensitivity to global or holistic information (Busey & Vanderkolk, ; Busey & Parada, 2009; Thompson & Tangen, ). For example, Thompson and Tangen showed that fingerprint experts can accurately match prints clouded in noise or prints presented only very briefly. A careful comparison of local features cannot explain these expert-novice differences. Moreover, our research suggests that while novices can perform above chance in these tasks, expertise substantially enhances the ability to use global visual information for accurate fingerprint comparison decisions. This highlights the importance of experience and extensive exposure to a wide variety of prints in developing the perceptual skills needed for expert performance (Searston & Tangen, , ). While our findings provide insight into the perceptual mechanisms underlying expertise in fingerprint examination, they should not be taken as evidence that examiners rely on inferred details in operational settings. They should also not be used as a validation of expert performance under challenging casework conditions in forensic reporting or court testimony. Instead, these results highlight how developing a sensitivity to global image properties might support fingerprint comparison decisions under controlled conditions, contributing to our general understanding of perceptual expertise. Future research should examine how global and local processing work together in expert decision-making, and test training methods that develop both abilities in novice analysts (see Growns et al., ; Robson et al., ; Searston et al., for examples of effective training). 
Additionally, examining how experts integrate global and local information under different conditions could provide deeper insights into the cognitive mechanisms underlying fingerprint expertise (Robson et al., ). In conclusion, our study provides evidence that fingerprint expertise involves leveraging global visual information alongside local minutiae. Under controlled experimental conditions, experts demonstrated superior ability to leverage global properties of fingerprints—such as ridge flow patterns—to make accurate comparisons. This heightened sensitivity to global patterns may guide experts’ attention to relevant local features, enabling more efficient and accurate detailed analysis. Although these results advance our theoretical understanding of perceptual expertise, we emphasize that they should not necessarily be used to inform or validate operational fingerprint examination procedures. Rather, these findings further reveal the perceptual mechanisms that characterize expert performance in high-stakes visual comparison domains, from fingerprint examination to medical image interpretation. What emerges is a defining feature of perceptual expertise: the ability to rapidly process global visual information while maintaining precise attention to local detail. |
ADNP dysregulates methylation and mitochondrial gene expression in the cerebellum of a Helsmoortel–Van der Aa syndrome autopsy case
In the nucleus, ADNP plays a role in chromatin remodeling: it binds directly to other chromatin remodelers, including the BAF complex members BRG1, ARID1A, and SMARCC2 by its C-terminal tail as demonstrated in a HEK293 human embryonic kidney cell line, and to CHD4 by its N-terminus as well as HP1β by its C-terminus in the repressive ChAHP complex discovered in murine embryonic stem cells, where it competes with CTCF for a common set of binding motifs. In addition, a stable triplex of ADNP, BRG1 and CHD4 was reported in murine stem cells, while POGZ and HP1γ form a nuclear complex with ADNP in the embryonic mouse cortex. Most recently, ADNP was predicted to interact with the WDR5-SIRT1-BRG1-HDAC2 complex, which includes YY1. Although the involvement of ADNP in chromatin remodeling functions has been firmly established, the role of these protein complexes in the human brain remains to be determined. In terms of function, ADNP is involved in neural tube closure and brain development, controlling the expression of hundreds if not thousands of genes. The chromatin function of ADNP is reflected by specific, aberrant methylation patterns in the blood of patients. In fact, and almost uniquely to ADNP, two partially opposing methylation patterns have been described, depending on the location of the mutation. Whereas mutations located at the 3′-end and 5′-end of the ADNP gene (outside of nucleotides c.2000–2340) represent a Class I episignature with a pattern of overall hypomethylated CpGs, mutations in the central region (within nucleotides c.2000–2340) of the gene instead show CpG hypermethylation. Interestingly, the hypermethylated region, encompassing the recurrent p.Tyr719* ADNP mutation, is associated with a more severe clinical presentation.
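The positional rule described above can be written down directly. The sketch below is a toy illustration of that classification (the function name and labels are ours; real episignature calling is done from genome-wide methylation data, not from the mutation coordinate alone):

```python
def adnp_methylation_pattern(cdna_pos: int) -> str:
    """Toy rule from the episignature findings summarized above:
    mutations within c.2000-2340 associate with CpG hypermethylation,
    mutations outside that window with a Class I hypomethylation pattern."""
    if 2000 <= cdna_pos <= 2340:
        return "central region: CpG hypermethylation"
    return "Class I: overall CpG hypomethylation"

# c.1676dupA (the mutation studied here) lies outside the central window;
# the recurrent p.Tyr719* (codon 719 ends at c.2157) lies inside it.
print(adnp_methylation_pattern(1676))
print(adnp_methylation_pattern(2157))
```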
Cytoplasmic roles for ADNP have also been suggested, e.g., involvement in autophagy by binding LC3, and interactions with the cytoskeleton via the microtubule end-binding proteins (EB1/EB3), with Adnp deficiency resulting in impaired axonal transport and impaired dendritic spines. Additionally, ADNP interacts with other cytoskeletal proteins such as SHANK3 and actin, as well as with the armadillo sequence of beta-catenin, important for WNT signaling. However, none of the above-mentioned studies have been performed in disease-relevant tissue. Instead, immortalized human cell lines, murine tissues, embryonic stem cells or other model systems have been investigated. Here, we present a unique case study on autopsy material of a six-year-old child with the heterozygous c.1676dupA/p.His559Glnfs*3 de novo ADNP mutation. By combining in-depth epigenetic, transcriptomic, and proteomic studies in the cerebellum of this post-mortem ADNP subject, we were able to confirm the involvement of pathways such as WNT signaling in Helsmoortel–Van der Aa syndrome, as well as to demonstrate ADNP involvement in autophagy and mitochondrial (dys)function(s).

Post-mortem tissues and subjects

Clinical information of a nine-year-old female patient was obtained under informed written consent from the Institute Born-Bunge vzw IBB NeuroBioBank of the University of Antwerp and transferred with written informed consent under HMTA20210040 after approval of the Ethics Committee of the Antwerp University Hospital/University of Antwerp. The female subject, used as a control in this study, showed symptoms analogous to a sporadic form of Rett syndrome and died following obstructive apnea. Twelve hours after death, cerebellar tissue was collected during the autopsy and frozen in liquid nitrogen or fixed in formaldehyde.
Frozen sections, paraffin sections, and celloidin embeddings were extensively investigated by an expert pathologist, revealing no morphological abnormalities in any brain region, except some fibrillar gliosis in the hippocampus. Comparisons to other sections of an age-matched control showed similar cytology and no neuronal loss. The substantia nigra contained normal amounts of melanin granules and the cerebellum contained no loss of Purkinje cells. Clinical information of a deceased six-year-old male patient with the heterozygous c.1676dupA/p.His559Glnfs*3 ADNP mutation was received under informed consent under B300201627322 and approved by the Ethics Committee of the Antwerp University Hospital. The patient died of multiple organ failure. Cerebellar tissue was collected during autopsy following a 35-h post-mortem interval, and subsequently frozen in liquid nitrogen or fixed in formaldehyde. Clinical evaluation was performed by at least one expert clinical geneticist. The ADNP mutation was confirmed by Sanger sequencing using the forward primer 5′-TGATGTGCAAGTGCATCAGA-3′ and reverse primer 5′-TGTGCACTTCGAAAAAGAACAT-3′. Conservation of the amino acids changed by the ADNP mutation was verified using ClustalW.

Plasmid constructs and site-directed mutagenesis

The pCMV3 expression vector encoding human wild-type ADNP fused to either an N-terminal GFPSpark® or N-DYKDDDDK (Flag®) tag was purchased from Sino Biological (HG11106-ANG; HG11106-NF). The c.1676dupA mutation was introduced in the N-DYKDDDDK (Flag®) ADNP expression vector by PCR mutagenesis using the Q5® Site-Directed Mutagenesis Kit (New England Biolabs; E0554S) according to the manufacturer's protocol. Mutagenesis primers were designed using the NEBaseChanger Tool ( http://nebasechanger.neb.com/ ). The mutation was inserted with the forward primer 5′-ACACTAACATCCATCTCCTG-3′ and the reverse primer 5′-TGACTACCCTGCTGCAAT-3′ by thermocycling with an annealing temperature of 60 °C.
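As a quick sanity check on short primers such as the ones above, melting temperature can be roughly estimated with the Wallace rule (2 °C per A/T, 4 °C per G/C), a standard approximation for short oligonucleotides; this sketch is ours and is not part of the original protocol:

```python
def wallace_tm(primer: str) -> int:
    """Rough Tm estimate for short oligos (Wallace rule):
    Tm ~= 2 * (A + T) + 4 * (G + C), in degrees Celsius."""
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    return 2 * at + 4 * gc

# Sanger sequencing primers used above to confirm the ADNP mutation
print(wallace_tm("TGATGTGCAAGTGCATCAGA"))    # forward
print(wallace_tm("TGTGCACTTCGAAAAAGAACAT"))  # reverse
```

Both estimates land near the 60 °C annealing temperature quoted for the mutagenesis thermocycling, which is the usual ballpark for 20-mer primers.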
DNA was purified from transformed high-efficiency NEB 5-alpha competent E. coli cells using the NucleoSpin Plasmid EasyPure Mini kit (Macherey Nagel; 740727.50) according to the manual. The mutation was confirmed by Sanger sequencing.

Cell lines and culture conditions

HEK293T cells (ATCC; CRL-3216™) were cultured at low passage number in DMEM (Gibco™; 11965092), supplemented with 10% fetal bovine serum (Gibco™; 26140079) and 1% penicillin/streptomycin (Gibco™; 15070063). Age- and sex-matched Epstein–Barr virus-transformed lymphoblastoid cell lines (LCLs) of healthy subjects (n = 4) and patients with different ADNP mutations (n = 6) (Additional file : Table S1) were cultured in RPMI (Gibco™; A1049101), supplemented with 15% fetal bovine serum (Gibco™; 26140079), 1% penicillin/streptomycin (Gibco™; 15070063), 1% sodium pyruvate (Gibco™; 11360070), and 1% GlutaMAX (Gibco™; 35050061). Age- and sex-matched skin fibroblasts of two unrelated asymptomatic subjects and two patients with different ADNP mutations (n = 2) (Additional file : Table S1) were cultured in the same supplemented RPMI medium. Human primary cell lines were obtained from consenting individuals, guardians, tending clinicians, or parents. All procedures were carried out following the guidelines and regulations of the University of Antwerp/University Hospital of Antwerp (UZA) and approved by the Ethics Committee of the Antwerp University Hospital. All cell lines were cultured in a humidified incubator at 37 °C/5% CO2.

AlphaFold 3D-structural protein modeling

The predicted 3D-structure of human wild-type ADNP (UniProt; Q9H2P0) was acquired from the AlphaFold Protein Structure Database ( https://alphafold.ebi.ac.uk/ ). The p.His559Glnfs*3 mutant was queried in the amino acid sequence of wild-type ADNP.
Wild-type and mutant ADNP proteins were modeled using AlphaFold2 with ColabFold ( https://colab.research.google.com/github/sokrypton/ColabFold/blob/main/AlphaFold2.ipynb ), an online pipeline integrating AlphaFold2 protein structure modeling with many-against-many sequence searching (MMseqs2) and HHsearch. ChimeraX (UCSF, version 1.5) was used for visualization and annotation of the structural ADNP protein domains using the generated PDB output file as input.

Cellular ADNP transfection system

HEK293T cells were transiently transfected with 5 µg of human expression vectors: (1) wild-type ADNP with an N-terminal GFPSpark®-tag, (2) wild-type ADNP with an N-terminal DYKDDDDK (Flag®)-tag, or (3) mutant c.1676dupA ADNP fused to an N-terminal DYKDDDDK (Flag®)-tag, using Lipofectamine™ 3000 Transfection Reagent (Invitrogen; L3000008) in accordance with the manufacturer's protocol. Co-transfections were performed with equal amounts of both wild-type and mutant ADNP expression vectors. Transfection efficiency was about 70%, in line with the manufacturer's tested performance. Cells were harvested after 24 h for subcellular protein fractionation followed by western blotting.

ADNP expression analysis: total protein extraction and subcellular protein fractionation

After transient transfection of the ADNP expression vector of interest, HEK293T cells were detached with TrypLE™ Express Enzyme (1X), phenol red (Gibco™; 12605028), and subsequently washed with ice-cold DPBS (Gibco™; 14040133). Cerebellar tissue obtained from the post-mortem control subject and the deceased ADNP patient was homogenized with the TissueRuptor II (Qiagen; 9002755), mixing at the lowest speed.
For total protein extraction, cells and tissue were lysed in ice-cold RIPA buffer (150 mM NaCl, 50 mM Tris, 0.5% sodium deoxycholate, 1% NP-40 and 2% sodium dodecyl sulfate), supplemented with the cOmplete™, Mini, EDTA-free Protease Inhibitor Cocktail (Roche; 04693159001) together with PhosSTOP™ phosphatase inhibitor (Roche; 4906845001). Lysis occurred for 15 min at 4 °C with agitation, and cell debris was removed by centrifuging 15 min at maximal speed in a precooled centrifuge. For subcellular fractionation of transfected HEK293T cells, a final amount of 10 × 10^6 cells was lysed and sequentially separated into cytoplasmic, membrane, nuclear soluble, chromatin-bound, and cytoskeletal protein extracts using the Subcellular Protein Fractionation Kit for Cultured Cells (Thermo Scientific™; 78840) following the manufacturer's instructions. The protein concentration was estimated with the Pierce™ BCA Protein Assay Kit (Thermo Scientific™; 23225). ADNP expression was investigated in the cytoplasmic, chromatin-bound, and cytoskeletal protein fractions.

Immunoblotting

A total amount of 20 μg protein lysate was reduced with NuPAGE™ Sample Reducing Agent (Invitrogen; NP0009) in NuPAGE™ LDS Sample Buffer (Invitrogen; NP0007). Samples were heated for 10 min at 70 °C and subsequently loaded for separation on Bolt™ 4 to 12% Bis–Tris 1.0 mm Mini Protein Gels (Invitrogen; NW04120BOX) using Bolt™ MOPS SDS Running Buffer (Invitrogen; B0001) at 120 V. The Precision Plus Protein™ All Blue Prestained Protein Standard (Biorad; #1610373) was used for estimation of the molecular weight in all experiments. After separation, proteins were transblotted onto Amersham™ Protran® Premium nitrocellulose membranes (Cytiva; GE10600008) using a Mini Trans-Blot® cell (Biorad; 1703930) with a transfer buffer containing 25 mM Tris, 192 mM glycine and 20% methanol (pH 8.3). Successful protein transfer was checked with a Ponceau S solution (Sigma Aldrich; P7170).
Nitrocellulose membranes were blocked with either 5% blocking-grade non-fat dry milk (NFDM) (Carl Roth; T145.4) or 5% bovine serum albumin (BSA) (Carl Roth; CP84.1) dissolved in Tris-buffered saline with Tween 20 (TBST) for one hour at room temperature with agitation. Primary antibodies (Additional file : Table S2) were tested and optimized to minimize background signal. Signal amplification was achieved by incubation with appropriate HRP-conjugated immunoglobulins (Agilent) at a 1:2000 dilution in either 5% blocking-grade non-fat dry milk/TBST or 5% BSA/TBST solution. The signal was detected using the Pierce™ ECL Western Blotting Substrate (Thermo Scientific™; 32106). The West Femto Maximum Sensitivity Substrate (Thermo Scientific™; 34095) was used for ADNP detection specifically. Image acquisition was executed with the Amersham™ Imager 680 (Cytiva). Monoclonal GAPDH (Cell Signaling Technology; 4317), Histone H3 (Abcam; 10799), and β-actin (Sigma-Aldrich; A5441) antibodies (Table S3) were used as loading controls for all the experiments. Image quantification was performed using ImageJ software. Graphical representation was performed in GraphPad Prism version 9.3.1 using an unpaired Student's t-test assuming equal variances and normal distribution. Full western blot images are shown as supplementary materials (Additional file : Data S11).

Human methylation EPIC BeadChip array and data processing

Total DNA was isolated from the cerebellar tissue of the post-mortem control subject and patient (n = 1) using the DNeasy Blood and Tissue Kit (Qiagen; 69504) according to the manufacturer's instructions. Subsequently, bisulfite conversion of 250 ng isolated DNA was performed using the EZ DNA Methylation Kit (Zymo Research, D5001).
To confirm successful bisulfite conversion, a methylation-conserved fragment of the human SALL3 gene was amplified using the following primers: 5′-GCGCGAGTCGAAGTAGGGC-3′ as forward primer and 5′-ACCCAACGATACCTAATAATAAAACC-3′ as reverse primer with the PyroMark PCR kit (Qiagen; 978703). Amplified products were separated on a 1.5% agarose gel stained with GelRed® Nucleic Acid Gel Stain (Biotium; 41002). The TrackIt™ 100 bp DNA Ladder (Invitrogen; 10488-058) was used as a reference marker. Bisulfite-converted samples were hybridized on the Infinium Human Methylation EPIC BeadChip (Illumina; 20020531) as described in the manufacturer's protocol. EPIC chips were analyzed using the Illumina Hi-Scan system, a platform interrogating more than 850,000 methylation sites quantitatively across the genome at single-nucleotide resolution. Raw intensity files were first quality checked and processed using the minfi package (v 1.38.0). Signal intensities were normalized using quantile normalization and beta values were calculated. Probes with a detection p-value higher than 0.01 were excluded. Non-CpG probes, probes with known single nucleotide polymorphisms (SNPs), multihit probes and probes on the X-/Y-chromosomes were filtered out. Probe annotation was carried out using the Illumina Infinium MethylationEPIC v1.0 B5 manifest file. All annotations (i.e., CpG islands, shelf, and shore regions) are reported based on the GRCh37/hg19 human genome build. We calculated the difference in methylation between the signals acquired in the patient versus the control subject, with a focus on CpG probes showing over 20% methylation difference, i.e., hypomethylation (Δβ-value < −0.2) and hypermethylation (Δβ-value > 0.2). We determined gene ontology enrichment using the Metascape webtool. Protein–protein network interactions of ADNP with the identified hypermethylated and hypomethylated genes were predicted using the STRING database version 12.0.
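The Δβ thresholding described above amounts to a per-probe subtraction and filter. A minimal sketch (the probe IDs and beta values below are invented for illustration):

```python
def classify_probes(patient_beta, control_beta, cutoff=0.2):
    """Split shared CpG probes into hyper-/hypomethylated sets by the
    beta-value difference (patient minus control), |delta| > cutoff."""
    hyper, hypo = {}, {}
    for probe, b_patient in patient_beta.items():
        if probe not in control_beta:
            continue
        delta = b_patient - control_beta[probe]
        if delta > cutoff:
            hyper[probe] = delta
        elif delta < -cutoff:
            hypo[probe] = delta
    return hyper, hypo

patient = {"cg0001": 0.85, "cg0002": 0.10, "cg0003": 0.50}
control = {"cg0001": 0.40, "cg0002": 0.55, "cg0003": 0.45}
hyper, hypo = classify_probes(patient, control)
print(sorted(hyper), sorted(hypo))
```

Probes with |Δβ| ≤ 0.2 (such as cg0003 here) fall into neither set, matching the over-20%-difference focus described above.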
The iRegulon plugin in Cytoscape was used to detect the transcription factors, their targets, and the motifs/tracks associated with co-expression of the hypomethylated and hypermethylated genes.

Targeted pyrosequencing analysis

Biologically relevant genes exhibiting a methylation difference of Δβ > 0.2 (hypermethylation) or Δβ < −0.2 (hypomethylation) between patient and control cerebellum were selected for pyrosequencing validation. Briefly, the required primers (i.e., forward, reverse, and sequencing primers) were designed using the PyroMark Assay Design 2.0 software (Qiagen) according to the manufacturer's instructions (Additional file : Table S3). Bisulfite-converted DNA fragments were PCR amplified using the PyroMark PCR kit (Qiagen; 978703). Successful PCR amplification was assessed by Tris-borate-EDTA (TBE) electrophoresis on a 1.5% agarose gel, after which the PyroMark Q24 Instrument (Qiagen) was used to perform pyrosequencing. Biotinylated PCR products were immobilized on streptavidin-coated Sepharose beads (GE Healthcare; 17511301), captured by the PyroMark Q24 vacuum workstation, washed and denatured. Single-stranded PCR products were subsequently released into a 24-well plate and annealed to the sequencing primer for 5 min at 80 °C. After completion of the pyrosequencing run, results were analyzed using the PyroMark Q24 software (Qiagen). Graphical representation was performed with GraphPad Prism version 9.3.1.

Total RNA extraction and sequencing of post-mortem brains and ADNP lymphoblastoid cell lines

Total RNA was extracted from the cerebellum of the control subject and the patient with the c.1676dupA/p.His559Glnfs*3 ADNP mutation (n = 1), as well as from control and patient LCLs with different ADNP mutations (n = 4 controls, n = 6 patients), using the RNeasy Mini Kit (Qiagen; 74106) according to the manufacturer's protocol.
RNA concentration was determined with the Qubit™ RNA Broad Range Assay Kit (Invitrogen™; Q10211) and the 260/280 ratio, indicative of RNA purity, was checked using a NanoDrop™ 2000/2000c Spectrophotometer (Thermo Scientific™; ND-2000). RNA integrity was verified with the Agilent RNA ScreenTape Assay on the 2200 TapeStation instrument (Agilent; G2964AA). Samples with the highest RIN score (RIN > 6.5) were selected and sent to Novogene for RNA sequencing (RNAseq) (Additional file : Table S1). All sequencing data were mapped to the annotated human genome GRCh38.p13 (Ensembl v106) with STAR, after adapter removal and read cleaning with Trimmomatic. Gene expression quantification was performed with featureCounts (Subread package). We calculated gene expression differences in our post-mortem brains using NOISeq (R package), a non-parametric method for one-versus-one cases that reports the log2-ratio of the two conditions (M) and the value of the difference between conditions (D). A gene is considered to be differentially expressed if its corresponding M and D values are likely to be higher than in noise. A similar analysis was performed for the functional enrichment exploration of the up- and downregulated genes found by NOISeq at q > 0.95. Differential gene expression analysis for the LCL samples was performed with DESeq2, an R package. Genes with a BH-adjusted p-value (FDR) < 0.05 and an absolute log2FC ≥ 0.5 were considered biologically relevant and further analyzed for functional enrichment (clusterProfiler R package, with the fGSEA function for gene set enrichment analysis and enrichGO for overrepresentation analysis in GO ontologies and KEGG pathways). Additional data visualization was supported by BigOmics, a user-friendly and interactive cloud-computing-based bioinformatics platform for the in-depth analysis, visualization, and interpretation of transcriptomics data.
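The LCL significance filter described here (BH-adjusted p < 0.05 and |log2FC| ≥ 0.5) reduces to a two-condition test per gene. A minimal sketch with invented toy values (these are not results from the study):

```python
def significant_genes(results, alpha=0.05, lfc_cutoff=0.5):
    """Filter DESeq2-style results, given as gene -> (log2FC, adjusted p),
    with the thresholds used in the LCL analysis above."""
    return {gene for gene, (lfc, padj) in results.items()
            if padj < alpha and abs(lfc) >= lfc_cutoff}

toy = {
    "GENE_A": (1.2, 0.001),   # upregulated and significant
    "GENE_B": (0.1, 0.900),   # essentially unchanged
    "GENE_C": (-0.8, 0.030),  # downregulated and significant
    "GENE_D": (0.9, 0.200),   # large fold change, but not significant
}
print(sorted(significant_genes(toy)))
```

Note that both conditions must hold: a large fold change with a non-significant adjusted p-value (GENE_D) is excluded, as is a significant but small change.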
Ultimately, we performed a meta-analysis of the differentially expressed genes identified in the LCLs and post-mortem brains based on gene ID intersection and looked for conserved ADNP-relevant genes beyond their tissue-specific expression (brains versus LCLs).

RT-PCR gene expression analysis

RT-PCR was used to confirm a selection of genes from the RNA sequencing experiment (LCLs, post-mortem brains and common genes between data sets) by converting 1 µg of total extracted RNA to cDNA using the SuperScript™ III Reverse Transcriptase kit (Invitrogen™; 18080093). Primer efficiencies were optimized using a standard dilution curve method on pooled cDNA samples from controls and patients per dataset (90% < E < 110%). RT-PCR was performed in triplicate using the CFX384 Touch Real-Time PCR Detection System (BioRad; 1855484) with primers listed in Additional file : Table S4, using the Takyon™ No ROX SYBR 2X MasterMix (Eurogentec; UF-NSMT-B0701). Reference gene stability was assessed using the geNorm method in qbase+ (Biogazelle), after which the most stable genes were selected for normalization. Data analysis was performed in qbase+ (Biogazelle) with a maximum deviation of 0.5 per triplicate using the stable housekeeping genes ACTB, B2M, and UBC. Statistical analysis was performed in GraphPad Prism 9.3.1 using an unpaired Student's t-test assuming unequal variances (post-mortem brains) and a Mann–Whitney U-test for unpaired measures (LCLs).

Label-free quantification (LFQ) mass spectrometry

Cerebellar tissue obtained from the post-mortem control subject and patient was homogenized with the TissueRuptor II (Qiagen; 9002755), mixing at the lowest speed. Tissues were lysed and homogenized in ice-cold RIPA buffer (150 mM NaCl, 50 mM Tris, 0.5% sodium deoxycholate, 1% NP-40 and 2% sodium dodecyl sulfate), supplemented with the cOmplete™, Mini, EDTA-free Protease Inhibitor Cocktail (Roche; 04693159001) together with PhosSTOP™ phosphatase inhibitor (Roche; 4906845001).
Lysis occurred for one hour at 4 °C with agitation, and cell debris was removed by centrifuging 30 min at maximal speed in a precooled centrifuge. The protein concentration was estimated with the Pierce™ BCA Protein Assay Kit (Thermo Scientific™; 23225). Protein reduction, alkylation and digestion were performed with the ProteoSpin™ On-Column Proteolytic Digestion Kit (Norgen; 17500) according to the manufacturer's protocol. A nano-liquid chromatography (LC) system (Dionex ULTIMATE 3000) coupled online to a Q Exactive™ Plus Hybrid Quadrupole-Orbitrap™ Mass Spectrometer (Thermo Scientific™) was used for the MS analysis. Peptides were loaded in five technical replicates onto a 75 μm × 150 mm, 2 μm fused silica C18 capillary column, and mobile phase elution was performed using buffer A (0.1% formic acid in Milli-Q water) and buffer B (0.1% formic acid in 80% acetonitrile/Milli-Q water). The peptides were eluted using a gradient from 5% buffer B to 95% buffer B over 120 min at a flow rate of 0.3 μL/min. The LC eluent was directed to an ESI source for Orbitrap analysis. The MS was set to perform data-dependent acquisition in the positive ion mode over a selected mass range of 375–2000 m/z for quantitative expression differences at the MS1 level (140,000 resolution), followed by peptide backbone fragmentation with a normalized collision energy of 28, and identification at the MS2 level (17,500 resolution). The *.RAW files were exported and processed in PEAKS AB 2.0 (Bioinformatics Solutions Inc.). The files were searched by target-decoy matching against the human UniProt database, with the false discovery rate set at 1%. Trypsin was indicated as the enzyme and up to two missed cleavages were allowed. Carbamidomethylation was set as a fixed modification. Label-Free Quantification (LFQ) and Match Between Runs were applied with default settings. PEAKS intensities were uploaded into MetaboAnalyst 5.0, subsequently quantile normalized, log-transformed and autoscaled.
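The preprocessing chain at the end of this section (log transformation followed by autoscaling, i.e., mean-centering and unit-variance scaling per feature) can be sketched as follows; the quantile-normalization step is omitted for brevity, and the intensity values are illustrative:

```python
import math
import statistics

def autoscale_log(values):
    """Log2-transform a feature's intensities, then autoscale
    (subtract the mean, divide by the sample standard deviation)."""
    logged = [math.log2(v) for v in values]
    mu = statistics.mean(logged)
    sd = statistics.stdev(logged)
    return [(x - mu) / sd for x in logged]

scaled = autoscale_log([4.0, 16.0, 64.0])
print(scaled)
```

After this step each feature has zero mean and unit variance on the log scale, which keeps high-abundance proteins from dominating downstream comparisons.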
An unpaired Student's t-test was used to compare the LFQ intensities between groups, and proteins with p-values ≤ 0.05 were considered significant. The significant protein IDs were subjected to Ingenuity Pathway Analysis (IPA) and the STRING database to identify affected canonical pathways and functional protein–protein interaction networks. A selection of differentially expressed proteins was ultimately confirmed with immunoblotting as described above.

Animals

Male C57BL/6JCr wild-type mice were purchased from Charles River at the age of 10 weeks with a body weight of 25 g. Animals were socially housed with a maximum of eight animals per standard mouse cage (22.5 cm × 16.7 cm × 14 cm) at constant humidity and temperature on a 12/12 h light–dark cycle. Food and water were available ad libitum. Cage enrichment was supplied by a platform, a tunnel, and extra cotton sticks. Ex vivo experiments, such as immunohistochemistry (IHC) and co-immunoprecipitation (Co-IP), were performed with cerebellar tissue at the age of 10 weeks. All experiments were conducted in compliance with EU Directive 2010/63/EU under ECD code 2022–59 after approval by the Animal Ethics Committee of the University of Antwerp.

Immunohistochemistry of frozen murine brain sections (IHC-Fr)

Male C57BL/6JCr wild-type mice were used for immunohistochemistry at the age of 10 weeks. All animals were anesthetized by an intraperitoneal injection of 133.3 mg/kg Dolethal (Vetoquinol; BE-V171692), then transcardially perfused for four minutes with 0.1 M phosphate-buffered saline (PBS) and subsequently for six minutes with ROTI®Histofix 4% paraformaldehyde solution (pH 7) (Carl Roth; 3105) at a steady perfusion rate of 12 rpm (2 ml/min). Whole brains were removed from the skull and cut in half along the midline.
The two hemispheres were placed in ROTI®Histofix 4% paraformaldehyde (pH 7) (Carl Roth; 3105) for two hours at room temperature, washed in PBS (0.01 M; pH 7.4) and transferred to 20% sucrose/PBS for overnight incubation at 4 °C. Tissue samples were embedded in PELCO® Cryo-Embedding Compound (Ted Pella, Inc.; 27300) and stored at −80 °C. Tissue was cut into sections of approximately 10 µm thickness using the Leica CM1950 Cryostat Microtome (Leica Biosystems, Wetzlar, Germany) and transferred to VWR® SuperFrost® Plus Adhesion Slides (VWR; 631-0108). The sections were washed three times with PBS. After blocking and permeabilization with PBS containing 0.05% thimerosal, 0.01% NaN3, 0.1% BSA, 1% Triton X-100 and 10% normal horse serum, sections were incubated overnight at room temperature with primary ADNP (Abcam; ab300114) or SIRT1 (Abcam; ab189494) antibody diluted 1:500 in the blocking/permeabilization buffer. Tissue sections were washed six times with PBS, followed by a 4-h incubation with Cy3-conjugated Fab fragment donkey anti-rabbit antibody (Jackson ImmunoResearch Europe Ltd; 711-167-003) diluted 1:2000 in PBS containing 0.05% thimerosal, 0.01% NaN3, 0.1% BSA and 10% normal horse serum. After six final washing steps in PBS, nuclei were stained with 5 µg/ml DAPI for 5 min, followed by three washes in PBS. Immunostained cryosections were mounted in Citifluor™ AF1 Mountant Solution (Electron Microscopy Sciences; 17970-100). Confocal images were obtained using a Leica SP8 confocal scanning microscope (Leica Microsystems, Wetzlar, Germany) equipped with a 405-nm diode laser (to detect DAPI) and a white light laser (WLL) used at 555 nm to visualize Cy3. Images were acquired with a 20× objective (HC PL APO 20x/0.75 IMM CORR CS2). Acquired images were analyzed in the FIJI image analysis freeware. Nuclei were identified as DAPI-positive regions after automated thresholding of the smoothed DAPI channel (Gaussian blur with kernel size 2).
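The nuclei-identification step (smoothing, automated thresholding, region labeling) can be reproduced outside FIJI. The sketch below uses SciPy plus a hand-rolled Otsu threshold as a stand-in for FIJI's automated thresholding — an assumption, since the text does not name the thresholding method used:

```python
import numpy as np
from scipy import ndimage as ndi

def otsu_threshold(img, nbins=256):
    """Otsu's automated threshold for a grayscale image."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                       # background weight per candidate cut
    w1 = w0[-1] - w0                           # foreground weight
    m0 = np.cumsum(hist * centers)
    mtot = m0[-1]
    valid = (w0[:-1] > 0) & (w1[:-1] > 0)
    mu0 = m0[:-1] / np.where(w0[:-1] > 0, w0[:-1], 1)
    mu1 = (mtot - m0[:-1]) / np.where(w1[:-1] > 0, w1[:-1], 1)
    var_between = np.where(valid, w0[:-1] * w1[:-1] * (mu0 - mu1) ** 2, 0)
    return centers[np.argmax(var_between)]     # cut maximizing between-class variance

def count_nuclei(dapi, sigma=2.0):
    """Gaussian-smooth a DAPI channel, threshold it, and label connected regions."""
    smoothed = ndi.gaussian_filter(np.asarray(dapi, dtype=float), sigma=sigma)
    mask = smoothed > otsu_threshold(smoothed)
    labels, n = ndi.label(mask)
    return n, labels
```

On a synthetic image with two well-separated bright regions, `count_nuclei` returns a count of 2 and a label image suitable for per-nucleus measurements.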
Co-immunoprecipitation (Co-IP) assay

Proteins were extracted from the wild-type mouse cerebellum using N-PER™ Neuronal Protein Extraction Reagent (Thermo Scientific; 87792), supplemented with 1 mg NAP/davunetide (MedChemExpress; HY-105066) to enhance EB1/EB3 binding, and subjected to Co-IP analysis with the Pierce™ Co-Immunoprecipitation Kit (Thermo Scientific™; 26149) according to the manufacturer's protocol. Briefly, 10 μg of the antibodies of interest, EB1 (Abcam; ab53358) and EB3 (Abcam; ab157217), were cross-linked to 50 μl of AminoLink Plus Coupling Resin. An amount of 1 mg of protein lysate was incubated overnight at 4 °C on an end-over-end shaker (VWR; 444-0503). Protein elution was performed in three steps of 10 μl, 35 μl, and 50 μl, respectively. The immunoprecipitated material was subsequently investigated by immunoblotting using the following primary antibodies (Additional file: Table S2): rat monoclonal EB1 (Abcam; ab53358), rabbit monoclonal EB3 (Abcam; ab157217), rabbit monoclonal ADNP (Abcam; ab300114) and SIRT1 (Abcam; ab189494). In addition, Pierce™ Control Agarose Resin (crosslinked 4% beaded agarose) was used as negative control (IgG). Upon immunoblotting (see above), proteins were visualized using the SuperSignal™ West Femto Maximum Sensitivity Substrate (Thermo Scientific™; 34094) after labeling with the appropriate secondary antibody (Agilent) at a 1:2000 dilution.

Motif analysis and molecular docking of ADNP and SIRT1 to microtubule end-binding proteins 1 and 3 (EB1/3)

Motif analysis of murine Adnp (UniProt; Q9Z103), Sirt1 (UniProt; Q53Z05), Eb1 (UniProt; Q61166), and Eb3 (UniProt; Q6PER3) was performed using the Eukaryotic Linear Motif (ELM) resource (http://elm.eu.org).
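ELM hits are defined as regular expressions matched against the protein sequence, so the scan itself reduces to a few lines. As a toy illustration, the pattern below is a simplified SxIP-type EB-binding consensus ([ST]-x-[IL]-P), not the exact ELM class definition:

```python
import re

# Simplified SxIP-type consensus for illustration only; real ELM classes use
# their own, more constrained regular expressions.
SXIP = re.compile(r"[ST].[IL]P")

def scan_motif(seq, pattern=SXIP):
    """Return (1-based start, matched text) for every motif hit in a sequence."""
    return [(m.start() + 1, m.group()) for m in pattern.finditer(seq)]
```

For example, `scan_motif("MKSKIPQAA")` reports a single hit, `(3, "SKIP")`, at position 3 of the toy sequence.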
Three-dimensional models were either generated with AlphaFold (https://alphafold.ebi.ac.uk) or obtained from the AlphaFold database (see above) and used to predict protein–protein interactions of Adnp and Sirt1 with both Eb1 and Eb3 using the ClusPro server (ClusPro 2.0; https://cluspro.org), a widely used protein–protein docking tool. The top 10 resulting models were superimposed in UCSF ChimeraX (version 1.6.1) to present the most probable binding interaction with Adnp and Sirt1.

Screening RNA sequencing data using a mitophagy gene panel

A mitophagy-related gene signature was obtained by clustering analysis of RNA sequencing data from the ADNP brain autopsy and LCLs of ADNP patients and control lines upon gene set enrichment analysis with a customized gene toolbox in the Omics Playground v2.8.22 (Additional file: Table S5). We confirmed the expression of mitophagy- and mitochondria-related genes (Additional file: Table S4) using RT-PCR as described above.

Autophagy flux assessment

The autophagy flux was determined in ADNP patient and control LCLs by treatment with 160 nM bafilomycin A1 (Santa Cruz Biotechnology; sc-2021550) for 2 h. Untreated and bafilomycin A1-treated cells were collected by centrifugation and subjected to western blotting as described above. To assess the autophagy flux, we detected the autophagy markers p62/SQSTM1 (Abcam; ab56416) at a 1:2000 dilution and LC3 (Abcam; ab192890) at a 1:1000 dilution in untreated and treated conditions. All western blots were controlled by GAPDH incubation (Cell Signaling Technology; 4317). Image quantification was performed using ImageJ software. Graphical representation was performed in GraphPad Prism version 9.3.1 using a two-way ANOVA with Šídák's multiple comparisons test.
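Flux quantification from the resulting densitometry is arithmetically simple. The sketch below takes the bafilomycin-induced accumulation of loading-control-normalized LC3-II as the flux readout — a common convention that is assumed here rather than stated in the text:

```python
def normalize_to_loading(band, loading_control):
    """Densitometric band intensity relative to the loading control (e.g. GAPDH)."""
    return band / loading_control

def autophagy_flux(lc3ii_baf, gapdh_baf, lc3ii_untreated, gapdh_untreated):
    """Net autophagic flux, taken (by assumption) as the bafilomycin-induced
    accumulation of normalized LC3-II over the untreated condition."""
    return (normalize_to_loading(lc3ii_baf, gapdh_baf)
            - normalize_to_loading(lc3ii_untreated, gapdh_untreated))
```

A positive value indicates that lysosomal inhibition caused LC3-II to accumulate, i.e. that autophagosomes were being formed and degraded in the untreated cells.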
Live cell imaging: mitochondrial redox state and subcellular localization

Intact fibroblasts of controls and ADNP patients (n = 2) were seeded at a density of 4 × 10⁶ cells in a 6-well plate for live cell imaging and subsequently stained with 250 nM of the fluorescent probe MitoTracker® Red CM-H2XRos (Invitrogen; M7513) according to the manufacturer's protocol to assess mitochondrial redox state and subcellular localization. The redox state of mitochondria is determined by the levels of NAD+/NADH, FAD/FADH2, NADP+/NADPH, glutathione/glutathione disulfide (GSH/GSSG) and reactive oxygen species (ROS), which reflect mitochondrial metabolic activity and overall fitness. If the electron transport chain is compromised or the redox state is imbalanced, ROS production increases. When CM-H2XRos enters the mitochondria, it is oxidized depending on the relative amount of ROS present, and this oxidation changes the fluorescence properties of the red dye. The emitted red fluorescence signal was measured with a multimode microplate reader (Tecan Spark™) and analyzed using the Spark Control™ V3.2 application. The fluorescence signal was statistically quantified using an unpaired Student's t-test assuming equal variances in GraphPad Prism 9.3.1. Additionally, fibroblasts were imaged with the Olympus CKX53 fluorescence microscope (Olympus, Antwerp, Belgium) to visualize the subcellular localization of the mitochondria at better resolution.

Determination of mitochondrial DNA copy number

Total DNA was isolated from ADNP LCLs and skin fibroblasts and their controls using the DNeasy Blood and Tissue Kit (Qiagen; 69504) according to the manufacturer's instructions. Mitochondrial DNA copy number (mtDNA-CN) was determined using RT-PCR.
Briefly, the cycle threshold (Ct) values of a mitochondrial-specific (tRNA-Leu) and a nuclear-specific (B2M) target were determined in triplicate for each sample using the following primers: tRNA-Leu-Fwd: 5′-CACCCAAGAACAGGGTTTGT-3′ and tRNA-Leu-Rev: 5′-TGGCCATGGGTATGTTGTTA-3′, and B2M-Fwd: 5′-TGCTGTCTCCATGTTTGATGTATCT-3′ and B2M-Rev: 5′-TCTCTGCTCCCCACCTCTAGGT-3′. The difference in Ct values (ΔCt) for each replicate represents a raw relative measure of mtDNA-CN.

Seahorse XF cell mito stress test

ADNP patient and unrelated sex- and age-matched control fibroblasts (n = 2) were cultured on Seahorse XFp miniplates at a density of 4 × 10⁴ cells (Agilent Technologies; 103725-100) and incubated overnight at 37 °C/5% CO2. Prior to the Seahorse XF Cell Mito Stress Test assay, the fibroblast medium was replaced with Seahorse XF RPMI medium (pH 7.4) (Agilent Technologies; 103576-100), supplemented with 1.0 M Seahorse XF Glucose Solution (Agilent Technologies; 103577-100), 100 mM Seahorse XF Pyruvate Solution (Agilent Technologies; 103578-100), and 200 mM Seahorse XF L-Glutamine Solution (Agilent Technologies; 103579-100). The drug ports of the sensor cartridge were loaded with 1 µM oligomycin (port A), 0.7 µM carbonyl cyanide-4-(trifluoromethoxy)phenylhydrazone (FCCP) (port B), and 0.5 µM rotenone/antimycin A (Rot/AA) (port C). Next, the cells seeded in the Seahorse XF HS Miniplates, together with the sensor cartridge, were loaded into the Seahorse XF HS Mini Analyzer (Agilent; S7852A) and subjected to the Agilent Cell Mito Stress Test assay (Agilent; 103010-100) to determine the real-time oxygen consumption rate (OCR) for 1.5 h.
First, baseline respiration (basal OCR) was measured prior to mitochondrial perturbation by sequential injection of 1.5 µM oligomycin (a complex V inhibitor, to decrease electron flow through the electron transport chain (ETC)), 3 µM FCCP (an uncoupling agent, to promote maximum electron flow through the ETC), and a mixture of 0.5 µM rotenone/antimycin A (complex I and complex III inhibitors, respectively, to shut down mitochondria-related respiration). All compounds were included in the Seahorse XFp Cell Mito Stress Test Kit (Agilent; 103010-100). The data were analyzed using Agilent Seahorse Analytics. Statistical analysis was performed in GraphPad Prism 9.3.1 using an unpaired Student's t-test assuming equal variances. Clinical information of a nine-year-old female patient was obtained under informed written consent from the Institute Born-Bunge vzw IBB NeuroBioBank of the University of Antwerp and transferred with written informed consent under HMTA20210040 after approval by the Ethics Committee of the Antwerp University Hospital/University of Antwerp. The female subject, used as a control in this study, showed symptoms analogous to a sporadic form of Rett syndrome and died following obstructive apnea. Twelve hours after death, cerebellar tissue was collected during the autopsy and frozen in liquid nitrogen or fixed in formaldehyde. Frozen sections, paraffin sections, and celloidin embeddings were extensively investigated by an expert pathologist, revealing no morphological abnormalities in any brain region except some fibrillar gliosis in the hippocampus. Comparison to sections of an age-matched control showed similar cytology and no neuronal loss. The substantia nigra contained normal amounts of melanin granules and the cerebellum showed no loss of Purkinje cells.
Clinical information of a deceased six-year-old male patient with the heterozygous c.1676dupA/p.His559Glnfs*3 ADNP mutation was received under informed consent under B300201627322, approved by the Ethics Committee of the Antwerp University Hospital. The patient died of multiple organ failure. Cerebellar tissue was collected during autopsy following a 35-h post-mortem interval and subsequently frozen in liquid nitrogen or fixed in formaldehyde. Clinical evaluation was performed by at least one expert clinical geneticist. The ADNP mutation was confirmed by Sanger sequencing using the forward primer 5′-TGATGTGCAAGTGCATCAGA-3′ and reverse primer 5′-TGTGCACTTCGAAAAAGAACAT-3′. Conservation of the amino acids changed by the ADNP mutation was verified using ClustalW. The pCMV3 expression vector encoding human wild-type ADNP fused to either an N-terminal GFPSpark® or N-DYKDDDDK (Flag®) tag was purchased from Sino Biological (HG11106-ANG; HG11106-NF). The c.1676dupA mutation was introduced into the N-DYKDDDDK (Flag®) ADNP expression vector by PCR mutagenesis using the Q5® Site-Directed Mutagenesis Kit (New England Biolabs; E0554S) according to the manufacturer's protocol. Mutagenesis primers were designed using the NEBaseChanger Tool (http://nebasechanger.neb.com/). The mutation was inserted with the forward primer 5′-ACACTAACATCCATCTCCTG-3′ and the reverse primer 5′-TGACTACCCTGCTGCAAT-3′ by thermocycling with an annealing temperature of 60 °C. DNA was purified from transformed high-efficiency NEB 5-alpha competent E. coli cells using the NucleoSpin Plasmid EasyPure Mini Kit (Macherey-Nagel; 740727.50) according to the manual. The mutation was confirmed by Sanger sequencing. HEK293T cells (ATCC; CRL-3216™) were cultured at low passage number in DMEM (Gibco™; 11965092), supplemented with 10% fetal bovine serum (Gibco™; 26140079) and 1% penicillin/streptomycin (Gibco™; 15070063).
Age- and sex-matched Epstein–Barr virus-transformed lymphoblastoid cell lines (LCLs) of healthy subjects (n = 4) and patients with different ADNP mutations (n = 6) (Additional file: Table S1) were cultured in RPMI (Gibco™; A1049101), supplemented with 15% fetal bovine serum (Gibco™; 26140079), 1% penicillin/streptomycin (Gibco™; 15070063), 1% sodium pyruvate (Gibco™; 11360070), and 1% GlutaMAX (Gibco™; 35050061). Age- and sex-matched skin fibroblasts of two unrelated asymptomatic subjects and two patients with different ADNP mutations (n = 2) (Additional file: Table S1) were cultured in the same supplemented RPMI medium. Human primary cell lines were obtained from consenting individuals, guardians, tending clinicians, or parents. All procedures were carried out following the guidelines and regulations of the University of Antwerp/University Hospital of Antwerp (UZA) and approved by the Ethics Committee of the Antwerp University Hospital. All cell lines were cultured in a humidified incubator at 37 °C/5% CO2. The predicted 3D structure of human wild-type ADNP (UniProt; Q9H2P0) was acquired from the AlphaFold Protein Structure Database (https://alphafold.ebi.ac.uk/). The p.His559Glnfs*3 mutation was introduced into the amino acid sequence of wild-type ADNP. Wild-type and mutant ADNP proteins were modeled using AlphaFold2 with ColabFold (https://colab.research.google.com/github/sokrypton/ColabFold/blob/main/AlphaFold2.ipynb), an online pipeline integrating AlphaFold2 protein structure modeling with many-against-many sequence searching (MMseqs2) and HHsearch. ChimeraX (UCSF, version 1.5) was used for visualization and annotation of the structural ADNP protein domains using the generated PDB output file as input.
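The truncation implied by the p.His559Glnfs*3 notation follows directly from the HGVS frameshift convention: the number after "fs*" is the position of the new stop codon in the shifted reading frame, counting the first changed residue (559) as 1, which places the stop at residue 561. A hypothetical helper for this arithmetic, written for illustration only and not part of the original analysis:

```python
import re

def fs_termination_position(hgvs_p):
    """Position of the new stop codon implied by a simple HGVS frameshift,
    e.g. p.His559Glnfs*3 -> 559 + 3 - 1 = 561 (HGVS 'fs*' convention)."""
    m = re.fullmatch(r"p\.[A-Za-z]{3}(\d+)[A-Za-z]{3}fs\*(\d+)", hgvs_p)
    if not m:
        raise ValueError(f"not a simple frameshift notation: {hgvs_p}")
    first_changed, stop_offset = int(m.group(1)), int(m.group(2))
    return first_changed + stop_offset - 1
```

For the mutation studied here, `fs_termination_position("p.His559Glnfs*3")` returns 561, i.e. a mutant protein retaining 560 residues before the premature stop.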
HEK293T cells were transiently transfected with 5 µg of human expression vector: (1) wild-type ADNP with an N-terminal GFPSpark® tag, (2) wild-type ADNP with an N-terminal DYKDDDDK (Flag®) tag, or (3) mutant c.1676dupA ADNP fused to an N-terminal DYKDDDDK (Flag®) tag, using Lipofectamine™ 3000 Transfection Reagent (Invitrogen; L3000008) in accordance with the manufacturer's protocol. Co-transfections were performed with equal amounts of the wild-type and mutant ADNP expression vectors. Transfection efficiency was about 70%, in line with the manufacturer's tested performance. Cells were harvested after 24 h for subcellular protein fractionation followed by western blotting. After transient transfection with the ADNP expression vector of interest, HEK293T cells were detached with TrypLE™ Express Enzyme (1X), phenol red (Gibco™; 12605028) and subsequently washed with ice-cold DPBS (Gibco™; 14040133). Cerebellar tissue obtained from the post-mortem control subject and the deceased ADNP patient was homogenized with the TissueRuptor II (Qiagen; 9002755) at the lowest mixing speed. For total protein extraction, cells and tissue were lysed in ice-cold RIPA buffer (150 mM NaCl, 50 mM Tris, 0.5% sodium deoxycholate, 1% NP-40 and 2% sodium dodecyl sulfate), supplemented with the cOmplete™, Mini, EDTA-free Protease Inhibitor Cocktail (Roche; 04693159001) together with PhosSTOP™ phosphatase inhibitor (Roche; 4906845001). Lysis occurred for 15 min at 4 °C with agitation, and cell debris was removed by centrifuging for 15 min at maximal speed in a precooled centrifuge. For subcellular fractionation of transfected HEK293T cells, a final amount of 10 × 10⁶ cells was lysed and sequentially separated into cytoplasmic, membrane, nuclear-soluble, chromatin-bound, and cytoskeletal protein extracts using the Subcellular Protein Fractionation Kit for Cultured Cells (Thermo Scientific™; 78840) following the manufacturer's instructions.
The protein concentration was estimated with the Pierce™ BCA Protein Assay Kit (Thermo Scientific™; 23225). ADNP expression was investigated in the cytoplasmic, chromatin-bound, and cytoskeletal protein fractions. A total amount of 20 μg protein lysate was reduced with NuPAGE™ Sample Reducing Agent (Invitrogen; NP0009) in NuPAGE™ LDS Sample Buffer (Invitrogen; NP0007). Samples were heated for 10 min at 70 °C and subsequently loaded for separation on Bolt™ 4 to 12% Bis–Tris 1.0 mm Mini Protein Gels (Invitrogen; NW04120BOX) with Bolt™ MOPS SDS Running Buffer (Invitrogen; B0001) at 120 V. The Precision Plus Protein™ All Blue Prestained Protein Standard (Biorad; #1610373) was used for estimation of the molecular weight in all experiments. After separation, proteins were transblotted onto Amersham™ Protran® Premium nitrocellulose membranes (Cytiva; GE10600008) using a Mini Trans-Blot® cell (Biorad; 1703930) with a transfer buffer containing 25 mM Tris, 192 mM glycine and 20% methanol (pH 8.3). Successful protein transfer was checked with Ponceau S solution (Sigma-Aldrich; P7170). Nitrocellulose membranes were blocked with either 5% blocking-grade non-fat dry milk (NFDM) (Carl Roth; T145.4) or 5% bovine serum albumin (BSA) (Carl Roth; CP84.1) dissolved in Tris-buffered saline with Tween (TBST) for one hour at room temperature with agitation. Primary antibodies (Additional file: Table S2) were tested and optimized to raise the least amount of background signal. Signal amplification was achieved by incubation with the appropriate HRP-conjugated immunoglobulins (Agilent) at a 1:2000 dilution in either 5% NFDM/TBST or 5% BSA/TBST. The signal was detected using the Pierce™ ECL Western Blotting Substrate (Thermo Scientific™; 32106); for ADNP detection specifically, the West Femto Maximum Sensitivity Substrate (Thermo Scientific™; 34095) was used. Image acquisition was executed with the Amersham™ Imager 680 (Cytiva).
Monoclonal GAPDH (Cell Signaling Technology; 4317), Histone H3 (Abcam; 10799), and β-actin (Sigma-Aldrich; A5441) antibodies (Additional file: Table S3) were used as loading controls for all experiments. Image quantification was performed using ImageJ software. Graphical representation was performed in GraphPad Prism version 9.3.1 using an unpaired Student's t-test assuming equal variances and normal distribution. Full western blot images are shown in the supplementary materials (Additional file: Data S11). Total DNA was isolated from the cerebellar tissue of the post-mortem control subject and patient (n = 1) using the DNeasy Blood and Tissue Kit (Qiagen; 69504) according to the manufacturer's instructions. Subsequently, bisulfite conversion of 250 ng isolated DNA was performed using the EZ DNA Methylation Kit (Zymo Research; D5001). To confirm successful bisulfite conversion, a methylation-conserved fragment of the human SALL3 gene was amplified using the forward primer 5′-GCGCGAGTCGAAGTAGGGC-3′ and the reverse primer 5′-ACCCAACGATACCTAATAATAAAACC-3′ with the PyroMark PCR Kit (Qiagen; 978703). Amplified products were separated on a 1.5% agarose gel stained with GelRed® Nucleic Acid Gel Stain (Biotium; 41002). The TrackIt™ 100 bp DNA Ladder (Invitrogen; 10488-058) was used as a reference marker. Bisulfite-converted samples were hybridized on the Infinium Human MethylationEPIC BeadChip (Illumina; 20020531) as described in the manufacturer's protocol. EPIC chips were analyzed using the Illumina Hi-Scan system, a platform quantitatively interrogating more than 850,000 methylation sites across the genome at single-nucleotide resolution. Raw intensity files were first quality checked and processed using the minfi package (v 1.38.0). Signal intensities were normalized using quantile normalization and beta values were calculated. Probes with a detection p-value higher than 0.01 were excluded.
Non-CpG probes, probes with known single-nucleotide polymorphisms (SNPs), multi-hit probes and probes on the X/Y chromosomes were filtered out. Probe annotation was carried out using the Illumina Infinium MethylationEPIC v1.0 B5 manifest file. All annotations (i.e., CpG islands, shelf, and shore regions) are reported based on the GRCh37/hg19 human genome build. We calculated the difference in methylation between the signals acquired in the patient versus the control subject, focusing on CpG probes showing a methylation difference of over 20%, i.e., hypomethylation (Δβ < −0.2) and hypermethylation (Δβ > 0.2). We determined gene ontology enrichment using the Metascape webtool. Protein–protein network interactions of ADNP with the identified hypermethylated and hypomethylated genes were predicted using the STRING database version 12.0. The iRegulon plugin in Cytoscape was used to detect the transcription factors, their targets, and the motifs/tracks associated with co-expression of the hypomethylated and hypermethylated genes. Biologically relevant genes exhibiting a methylation difference of Δβ > 0.2 (hypermethylation) or Δβ < −0.2 (hypomethylation) between patient and control cerebellum were selected for pyrosequencing validation. Briefly, the required primers (i.e., forward, reverse, and sequencing primers) were designed using the PyroMark Assay Design 2.0 software (Qiagen) according to the manufacturer's instructions (Additional file: Table S3). Bisulfite-converted DNA fragments were PCR amplified using the PyroMark PCR Kit (Qiagen; 978703). Successful PCR amplification was assessed by Tris-borate-EDTA (TBE) electrophoresis on a 1.5% agarose gel, after which the PyroMark Q24 Instrument (Qiagen) was used to perform pyrosequencing. Biotinylated PCR products were immobilized on streptavidin-coated Sepharose beads (GE Healthcare; 17511301), captured by the PyroMark Q24 vacuum workstation, washed and denatured.
Single-stranded PCR products were subsequently released into a 24-well plate and annealed to the sequencing primer for 5 min at 80 °C. After completion of the pyrosequencing run, results were analyzed using the PyroMark Q24 software (Qiagen). Graphical representation was performed with GraphPad Prism version 9.3.1. Total RNA was extracted from the cerebellum of the control subject and the patient with the c.1676dupA/p.His559Glnfs*3 ADNP mutation (n = 1), as well as from control and patient LCLs with different ADNP mutations (n = 4 controls, n = 6 patients), using the RNeasy Mini Kit (Qiagen; 74106) according to the manufacturer's protocol. RNA concentration was determined with the Qubit™ RNA Broad Range Assay Kit (Invitrogen™; Q10211), and the 260/280 ratio, indicative of RNA purity, was checked using a NanoDrop™ 2000/2000c Spectrophotometer (Thermo Scientific™; ND-2000). RNA integrity was verified with the Agilent RNA ScreenTape Assay on the 2200 TapeStation instrument (Agilent; G2964AA). Samples with the highest RIN scores (RIN > 6.5) were selected and sent to Novogene for RNA sequencing (RNA-seq) (Additional file: Table S1). All sequencing data were mapped to the annotated human genome GRCh38.p13 (Ensembl v106) with STAR after adapter removal and read cleaning with Trimmomatic. Gene expression quantification was performed with featureCounts (subread package). We calculated gene expression differences in the post-mortem brains using NOISeq (R package), a non-parametric method for one-versus-one cases that reports the log2 ratio of the two conditions (M) and the value of the difference between conditions (D). A gene is considered differentially expressed if its corresponding M and D values are likely to be higher than in noise. Functional enrichment was similarly explored for the up- and downregulated genes identified by NOISeq at q > 0.95.
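The M and D statistics have a direct arithmetic form: M is the log2 ratio of the two conditions and D is the absolute expression difference. A minimal sketch (the 0.5 pseudocount is an assumption to keep M finite for zero counts, not a documented NOISeq default):

```python
import numpy as np

def noiseq_md(expr_a, expr_b):
    """Per-gene NOISeq-style statistics for a one-vs-one comparison:
    M = log2 ratio of the two conditions, D = absolute difference."""
    a = np.asarray(expr_a, dtype=float)
    b = np.asarray(expr_b, dtype=float)
    m = np.log2((a + 0.5) / (b + 0.5))   # pseudocount of 0.5 is an assumption
    d = np.abs(a - b)
    return m, d
```

NOISeq then contrasts each gene's (M, D) pair against a noise distribution built from within-condition comparisons; only that empirical step is omitted here.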
Differential gene expression analysis for the LCL samples was performed with DESeq2, an R package. Genes with a BH-adjusted p-value (FDR) < 0.05 and an absolute log2 fold change ≥ 0.5 were considered biologically relevant and further analyzed for functional enrichment (clusterProfiler R package, with the fGSEA function for gene set enrichment analysis and enrichGO for overrepresentation analysis in GO ontologies and KEGG pathways). Additional data visualization was supported by BigOmics, a user-friendly and interactive cloud-based bioinformatics platform for the in-depth analysis, visualization, and interpretation of transcriptomics data.
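The relevance filter applied to the DESeq2 output (BH-adjusted padj < 0.05 and |log2FC| ≥ 0.5) reduces to a two-column predicate on the results table. A pandas sketch using DESeq2's standard results column names:

```python
import pandas as pd

def biologically_relevant(deseq2_results, alpha=0.05, lfc=0.5):
    """Filter a DESeq2 results table to genes with BH-adjusted padj < alpha
    and |log2FoldChange| >= lfc; rows with NA statistics are dropped first."""
    df = deseq2_results.dropna(subset=["padj", "log2FoldChange"])
    return df[(df["padj"] < alpha) & (df["log2FoldChange"].abs() >= lfc)]
```

The same thresholds can then feed the enrichment step, e.g. by passing the retained gene IDs to clusterProfiler.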
Statistical analysis was performed in GraphPad Prism 9.3.1 using an unpaired student T-test assuming unequal variances (post-mortem brains) and a Mann Witney U-test for unpaired measure (LCLs). Cerebellar tissue obtained from the post-mortem control subject and patient was homogenized with the TissueRuptor II (Qiagen; 9002755 ) with mixing at the lowest speed. Tissues were lysed and homogenized in ice cold RIPA buffer (150 mM NaCl, 50 mM Tris, 0.5% sodium deoxycholate, 1% NP-40 and 2% sodium dodecyl sulfate), supplemented with the cOmplete™, Mini, EDTA-free Protease Inhibitor Cocktail (Roche; 04693159001) together with PhosSTOP™ phosphatase inhibitor (Roche; 4906845001 ). Lysis occurred for one hour at 4 °C with agitation and cell debris was removed by centrifuging 30 min at maximal speed in a precooled centrifuge. The protein concentration was estimated with the Pierce™ BCA Protein Assay Kit (ThermoScientific™; 23225 ). Protein reduction, alkylation and digestion were performed with the ProteoSpin™ On-Column Proteolytic Digestion Kit (Norgen; 17500 ) according to manufacturer’s protocol. A nano-liquid chromatography (LC) column (Dionex ULTIMATE 3000) coupled online to a Q Exactive™ Plus Hybrid Quadrupole-Orbitrap™ Mass Spectrometer (Thermo Scientific™) was used for the MS analysis. Peptides were loaded for five technical replicates onto a 75 μm × 150 mm, 2 μm fused silica C18 capillary column, and mobile phase elution was performed using buffer A (0.1% formic acid in Milli-Q water) and buffer B (0.1% formic acid in 80% acetonitrile/Milli-Q water). The peptides were eluted using a gradient from 5% buffer B to 95% buffer B over 120 min at a flow rate of 0.3 μL/min. The LC eluent was directed to an ESI source for Orbitrap analysis. 
The MS was set to perform data dependent acquisition in the positive ion mode for a selected mass range of 375–2000 m/z for quantitative expression difference at the MS1 (140,000 resolution) level followed by peptide backbone fragmentation with normalized collision energy of 28 eV, and identification at the MS2 level (17,500 resolution). The *.RAW files were exported and processed in PEAKS AB 2.0 (Bioinformatics Solutions Inc.). The files were searched using target-decoy matching using the human UniProt database, with the false discovery rate set at 1%. Trypsin was indicated as the enzyme and up to two miscleavages were allowed. Carbamidomethylation was set as a fixed modification. Label-Free Quantification (LFQ) and Match Between Runs were used using default settings. PEAKS intensities were uploaded in MetaboAnalyst5.0, subsequently quantile normalized, log-transformed and autoscaled. An unpaired student T-test was used to compare the LFQ intensities between groups and those with p -values ≤ 0.05 were considered significant. The protein IDs with significant values were subjected to Ingenuity Pathway Analysis (IPA) and the String Database to identify affected canonical pathways and functional protein–protein interaction network. A selection of differentially expressed proteins was ultimately confirmed with immunoblotting as described above. Male C57BL/6JCr wild-type mice were purchased from Charles River at the age of 10 weeks with a body weight of 25 g. Animals were socially housed with a maximum of eight animals in standard mouse cages (22.5 cm × 16.7 cm × 14 cm) at constant humidity and temperature in a 12/12 h light–dark cycle. Food and water were available ad libitum. Cage enrichment was supplied by a platform, tunnel, and extra cotton sticks. Ex vivo experiments, such as immunohistochemistry (IHC) and co-immunoprecipitation (CoIP), were performed with cerebellar tissue at the age of 10 weeks. 
All conducted experiments were in compliance with the EU Directive 20,120/63/EU under ECD code 2022–59 after approval by the Animal Ethics Committee of the University of Antwerp. Male C57BL/6JCr wild-type mice were used for immunohistochemistry experiments at the age of 10 weeks. All animals were anesthetized by an intraperitoneal injection of 133.3 mg/kg Dolethal (Vetoquinol; BE-V171692 ), then transcardially perfused for four minutes with 0.1 M phosphate-buffered saline (PBS), subsequently for six minutes with ROTI®Histofix 4% paraformaldehyde solution (pH 7) (Carl Roth; 3105 ) using steady perfusion rate of 12 rpm (2 ml/min). Whole brains were removed from the skull and cut in half along the midline. The two hemispheres were placed in ROTI®Histofix 4% paraformaldehyde (pH 7) (Carl Roth; 3105 ) for two hours at room temperature, washed in PBS (0.01 M; pH 7.4) and transferred in 20% sucrose/PBS for overnight incubation at 4 °C. Tissue samples were embedded in PELCO® Cryo-Embedding Compound (Ted Pella, Inc.; 27300 ) and stored at − 80C. Tissue was cut in sections of approximately 10 µm thickness using the Leica CM1950 Cryostat Microtome (Leica Biosystems, Wetzlar, Germany) and transferred to VWR® SuperFrost® Plus, Adhesion Slides (VWR; 631-0108 ). The sections were washed three times using PBS. After blocking and permeabilization with PBS containing 0.05% thimerosal, 0.01% NaN 3 , 0.1% BSA, 1% Triton X-100 and 10% normal horse serum, sections were incubated overnight with primary ADNP antibody (Abcam; ab300114 ) or SIRT1 (Abcam; ab189494 ) antibody 1:500 diluted in the blocking/permeabilization buffer at room temperature. Tissue sections were washed six times with PBS, followed by a 4-h incubation with Cy3-conjugated Fab Fragment donkey anti-rabbit (Jackson ImmnoResearch Europe Ltd; 711-167-003 ) antibody 1:2000 diluted in PBS containing 0.05% thimerosal, 0.01% NaN3, 0.1% BSA and 10% normal horse serum. 
After six final washing steps in PBS, nuclei were stained with 5 µg/ml DAPI for 5 min, followed by three washes in PBS. Immunostained cryosections were mounted in Citifluor™ AF1 Mountant Solution (Electron Microscopy Sciences; 17970-100). Confocal images were obtained using a Leica SP8 confocal scanning microscope (Leica Microsystems, Wetzlar, Germany) equipped with a 405-nm diode laser (to detect DAPI) and a white light laser (WLL) used at 555 nm to visualize Cy3. Images were acquired with a 20× objective (HC PL APO 20x/0.75 IMM CORR CS2). Acquired images were analyzed in the FIJI image analysis freeware. Nuclei were identified as DAPI+ regions after automated thresholding of the smoothed DAPI channel (Gaussian blur with kernel size 2). Proteins were extracted from the wild-type mouse cerebellum using N-PER™ Neuronal Protein Extraction Reagent (Thermo Scientific; 87792), supplemented with 1 mg NAP/Davunetide (MedChemExpress; HY-105066) to enhance EB1/EB3 binding, and subjected to Co-IP analysis with the Pierce™ Co-Immunoprecipitation Kit (Thermo Scientific™; 26149) according to the manufacturer's protocol. Briefly, 10 µg of the antibodies of interest, EB1 (Abcam; ab53358) and EB3 (Abcam; ab157217), were cross-linked to 50 µl of AminoLink Plus Coupling Resin. An amount of 1 mg of protein lysate was incubated overnight at 4 °C on an end-over-end shaker (VWR; 444-0503). Protein elution was performed in three steps of 10 µl, 35 µl, and 50 µl, respectively. The immunoprecipitated materials were subsequently investigated by immunoblotting using the following primary antibodies (Additional file: Table S2): rat monoclonal EB1 (Abcam; ab53358), rabbit monoclonal EB3 (Abcam; ab157217), rabbit monoclonal ADNP (Abcam; ab300114) and SIRT1 (Abcam; ab189494). In addition, Pierce™ Control Agarose Resin (crosslinked 4% beaded agarose) was used as a negative control (IgG).
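The FIJI nuclei-identification step above (Gaussian smoothing of the DAPI channel followed by automated thresholding and labeling of DAPI+ regions) can be mimicked outside FIJI. The sketch below uses a synthetic image and a hand-rolled Otsu threshold as a stand-in for FIJI's automated thresholding; it is an analogy, not the study's macro.

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(img, nbins=256):
    """Classic Otsu: pick the cutoff maximizing between-class variance."""
    hist, edges = np.histogram(img, bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)
    w1 = w0[-1] - w0
    m0 = np.cumsum(hist * centers)
    mu0 = m0 / np.maximum(w0, 1)
    mu1 = (m0[-1] - m0) / np.maximum(w1, 1)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

# Synthetic "DAPI channel": dark background with two bright nuclei plus noise.
img = np.zeros((64, 64))
img[10:20, 10:20] = 1.0
img[40:52, 30:42] = 1.0
img += np.random.default_rng(1).normal(0, 0.05, img.shape)

smoothed = ndimage.gaussian_filter(img, sigma=2)   # analogous to a Gaussian blur, kernel 2
mask = smoothed > otsu_threshold(smoothed)         # DAPI+ regions
labels, n_nuclei = ndimage.label(mask)             # connected-component count
```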
Upon immunoblotting (see above), proteins were visualized using the SuperSignal™ West Femto Maximum Sensitivity Substrate (Thermo Scientific™; 34094) after labeling with the appropriate secondary antibody (Agilent) at a 1:2000 dilution. Motif analysis of murine Adnp (UniProt; Q9Z103), Sirt1 (UniProt; Q53Z05), Eb1 (UniProt; Q61166), and Eb3 (UniProt; Q6PER3) was performed using the Eukaryotic Linear Motif (ELM) resource (http://elm.eu.org). Three-dimensional models were either generated with AlphaFold (https://alphafold.ebi.ac.uk) or obtained from the AlphaFold database (see above) and used to predict protein–protein interactions of Adnp and Sirt1 with both Eb1 and Eb3 using the ClusPro server (ClusPro 2.0; https://cluspro.org), a widely used protein–protein docking tool. The top 10 resulting models were superimposed in UCSF ChimeraX (version 1.6.1) to present the most probable binding interaction with Adnp and Sirt1. A mitophagy-related gene signature was obtained by clustering analysis of RNA sequencing data from the ADNP brain autopsy and LCLs of ADNP patients and control lines upon gene set enrichment analysis with a customized gene toolbox in the Omics Playground v2.8.22 (Additional file: Table S5). We confirmed the expression of mitophagy- and mitochondrial-related genes (Additional file: Table S4) using RT-PCR as described above. The autophagy flux was determined in ADNP patient and control LCLs by treatment with 160 nM bafilomycin A1 (Santa Cruz Biotechnology; sc-2021550) for 2 h. Untreated and bafilomycin A1-treated cells were collected by centrifugation and subjected to western blotting as described above. We detected the expression of the autophagy markers anti-p62/SQSTM1 (Abcam; ab56416) at a 1:2000 dilution and anti-LC3 (Abcam; ab192890) at a 1:1000 dilution in untreated and treated conditions to assess the autophagy flux. All western blots were controlled by GAPDH incubation (Cell Signaling Technology; 4317).
Image quantification was performed using ImageJ software. Graphical representation was performed in GraphPad Prism version 9.3.1 using a two-way ANOVA with Šídák's multiple comparisons test. Intact fibroblasts of control and ADNP patients (n = 2) were seeded at a density of 4 × 10⁶ cells in a 6-well plate for live cell imaging and subsequently stained with 250 nM of the fluorescent probe MitoTracker® Red CM-H2XRos (Invitrogen; M7513) according to the manufacturer's protocol to assess mitochondrial redox state and subcellular localization. The redox state of mitochondria is determined by the levels of NAD+/NADH, FAD/FADH2, NADP+/NADPH, glutathione/glutathione disulfide (GSH/GSSG) and reactive oxygen species (ROS), which reflect mitochondrial metabolic activity and overall fitness. A compromised electron transport chain or an imbalance in the redox state leads to increased ROS production. When CM-H2XRos enters the mitochondria, it is oxidized in proportion to the amount of ROS present, and this oxidation changes the fluorescence properties of the red dye. The emitted red fluorescence signal was measured with a multimode microplate reader (Tecan Spark™) and analyzed using the Spark Control™ V3.2 application. The fluorescent signal was statistically quantified using an unpaired Student's t-test assuming equal variances in GraphPad Prism 9.3.1. Additionally, fibroblasts were imaged with the Olympus CKX53 fluorescence microscope (Olympus, Antwerp, Belgium) to visualize the subcellular localization of the mitochondria at better resolution. Total DNA was isolated from ADNP LCLs and skin fibroblasts and their controls using the DNeasy Blood and Tissue Kit (Qiagen; 69504) according to the manufacturer's instructions. Mitochondrial DNA copy number (mtDNA-CN) was determined using RT-PCR.
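Conceptually, the ΔCt-based copy-number readout reduces to a two-line computation: each PCR cycle is roughly a doubling, so the gap between the nuclear and mitochondrial Ct values scales the relative abundance as a power of two. The Ct triplicates below are hypothetical, and the direction of the subtraction (nuclear minus mitochondrial) is the usual convention, not a detail stated in the text.

```python
import statistics

# Hypothetical triplicate Ct values (illustrative, not the study's measurements).
ct_trna_leu = {"patient": [16.2, 16.4, 16.3], "control": [16.1, 16.3, 16.2]}
ct_b2m      = {"patient": [24.8, 25.0, 24.9], "control": [24.7, 24.9, 24.8]}

def relative_mtdna_cn(ct_mito, ct_nuclear):
    """Relative mtDNA copy number from the nuclear-vs-mitochondrial Ct gap.

    ΔCt = Ct(nuclear) - Ct(mitochondrial); assuming one doubling per cycle,
    relative abundance scales as 2 ** ΔCt.
    """
    d_ct = statistics.mean(ct_nuclear) - statistics.mean(ct_mito)
    return 2 ** d_ct

cn_patient = relative_mtdna_cn(ct_trna_leu["patient"], ct_b2m["patient"])
cn_control = relative_mtdna_cn(ct_trna_leu["control"], ct_b2m["control"])
ratio = cn_patient / cn_control  # ~1.0 here: no copy-number difference in the toy data
```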
Briefly, the cycle threshold (Ct) values of a mitochondrial-specific (tRNAleu) and a nuclear-specific (B2M) target were determined in triplicate for each sample using the following primers: tRNALEU-Fwd: 5′-CACCCAAGAACAGGGTTTGT-3′ and tRNALEU-Rev: 5′-TGGCCATGGGTATGTTGTTA-3′, and B2M-Fwd: 5′-TGCTGTCTCCATGTTTGATGTATCT-3′ and B2M-Rev: 5′-TCTCTGCTCCCCACCTCTAGGT-3′. The difference in Ct values (ΔCt) for each replicate represents a raw relative measure of mtDNA-CN. ADNP patient and unrelated sex- and age-matched control fibroblasts (n = 2) were cultured on Seahorse XFp miniplates at a density of 4 × 10⁴ cells (Agilent Technologies; 103725-100) and incubated overnight at 37 °C/5% CO2. Prior to the Seahorse XF Cell Mito Stress Test assay, the fibroblast medium was replaced with Seahorse XF RPMI medium (pH 7.4) (Agilent Technologies; 103576-100), supplemented with 1.0 M Seahorse XF Glucose Solution (Agilent Technologies; 103577-100), 100 mM Seahorse XF Pyruvate Solution (Agilent Technologies; 103578-100), and 200 mM Seahorse XF L-Glutamine Solution (Agilent Technologies; 103579-100). The drug ports of the sensor cartridge were loaded with 1 µM oligomycin (port A), 0.7 µM carbonyl cyanide-4-(trifluoromethoxy)phenylhydrazone (FCCP) (port B), and 0.5 µM rotenone/antimycin A (Rot/AA) (port C). Next, the cells seeded in the Seahorse XF HS Miniplates, together with the sensor cartridge, were loaded into the Seahorse XF HS Mini Analyzer (Agilent; S7852A) and subjected to the Agilent Cell Mito Stress Test assay (Agilent; 103010-100) to determine the real-time oxygen consumption rate (OCR) for 1.5 h.
First, the baseline respiration (basal OCR) was measured prior to mitochondrial perturbation by sequential injection of 1.5 µM oligomycin (a complex V inhibitor that decreases electron flow through the electron transport chain (ETC)), 3 µM FCCP (an uncoupling agent that promotes maximum electron flow through the ETC), and a mixture of 0.5 µM rotenone/antimycin A (complex I and complex III inhibitors, respectively, which shut down mitochondrial respiration). All compounds were included in the Seahorse XFp Cell Mito Stress Test Kit (Agilent; 103010-100). The data were analyzed using Agilent Seahorse Analytics. Statistical analysis was performed in GraphPad Prism 9.3.1 using an unpaired Student's t-test assuming equal variances.

Clinical presentation

The patient was born prematurely, at 32 weeks of gestational age, to healthy, non-consanguineous parents. His birth weight was 1790 g; the Apgar score was 10/10. A grade III intracranial hemorrhage was diagnosed. Clinical reports showed that the patient presented with motor delays, developmental delays, autism spectrum disorder, hypotonia, and small genitalia. His parents also reported visual impairments, feeding and eating problems, as well as sleep disorders. Phenotypically, the patient presented with a prominent forehead and eyelashes, downward slanting eyes, malformed ears, a wide nasal bridge, a broad and long philtrum, a large mouth with a thick lower vermillion, a pointed chin and widely spaced teeth (Fig. A, B), all well-defined characteristics described in a cohort of 78 Helsmoortel–Van der Aa patients (Additional file: Table S6). At the age of 2.5 years, he developed an upper respiratory tract infection complicated by hepatitis and seizures. He was transferred to the ICU, where supportive treatment and plasmapheresis were started. Liver biopsy showed extensive necrosis of the parenchyma and moderate cholestasis.
MRI showed diffuse cortical atrophy of the brain parenchyma, a marked reduction in white matter volume, and gliosis in both frontal and temporoparietal lobes, which could indicate sequelae of acute hepatic encephalopathy. He developed refractory generalized epilepsy and received a combination treatment of antiepileptic drugs, e.g., carbamazepine, oxcarbazepine, levetiracetam, clonazepam, clobazam and topiramate. During his lifespan, he underwent two liver transplantations and received immunosuppressants. Following the second liver transplant, at the age of six years and three months, the child passed away because of multiple organ failure. An autopsy was performed, and various tissue samples were donated with informed consent. Molecular testing had indicated that the patient was negative for any inheritable metabolic disorders. Whole-exome sequencing (WES) of the patient's blood revealed a heterozygous de novo duplication of adenine at position 1676 in the ADNP gene (chr20:50,893,037–50,893,039; RefSeq isoform ENST00000621696.5, Human GRCh38/hg38). The mutation was confirmed by Sanger sequencing (Fig. C). It converts the histidine (His) residue at position 559 to glutamine (Gln), leading to a frameshift mutation with a premature stop codon two amino acids downstream (Fig. D, E). Cerebellar tissue, known for its highest ADNP expression, allowed us to validate the presence of ADNP mRNA and protein in autopsy material by performing an expression analysis using real-time reverse-transcription PCR (RT-PCR) and Western blotting. To investigate wild-type ADNP mRNA levels, we designed a primer set at the 3′ region of exon 6 (corresponding to the C-terminal portion of the protein). Here, a significant two-fold increase in total ADNP levels was observed in the patient compared to the control subject (p = 0.0001; ***), consistent with findings in our RNA sequencing described below (Fig. A).
Attempts to quantify the 5′ end of the transcript were not successful, suggesting partial mRNA degradation. At the protein level, we tested endogenous ADNP levels in the human brain using extensively validated C-terminal and N-terminal ADNP antibodies. We were able to detect wild-type ADNP (150 kDa) in the control brain, but not in the patient, using both antibodies (Fig. B, C). To investigate the co-expression of the full-length and mutant protein, we co-transfected wild-type and p.His559Glnfs*3 mutant N-DYKDDDDK (Flag®) expression vectors into HEK293T cells. Co-expression of wild-type and mutant ADNP demonstrated the presence of the wild-type protein (150 kDa) together with a truncated mutant protein (63 kDa) using an N-terminal antibody, mimicking the expected expression in the patient. C-terminal antibody incubation resulted in the detection of wild-type ADNP (150 kDa) exclusively. Together, these findings confirm an apparent molecular weight of ADNP of 150 kDa, above its calculated molecular weight of 123 kDa, but show instability of the protein in post-mortem brain material of the patient. To study the molecular impact of the patient mutation, we performed in silico modeling of the wild-type ADNP protein (UniProt; Q9H2P0) and the p.His559Glnfs*3 mutant using AlphaFold. Here, the structure of the wild-type protein demonstrated the DNA-binding homeobox domain in proximity to the bipartite NLS sequence, whereas the neuroprotective NAP motif resides at the surface of the protein, partially occluded by flexible intrinsically disordered regions (IDRs) and low-complexity regions (LCRs) located near the C-terminus, suggestive of a role in protein–protein interactions. Moreover, the eIF-4E binding motifs and the glutaredoxin active site are centrally positioned in the core of the wild-type protein, assembling several of its zinc finger motifs (Fig. A). The p.His559Glnfs*3 mutant truncates the NLS region, impairing nuclear transport.
Moreover, downstream protein domains, including the DNA-binding homeodomain and the HP1 binding motif, are also lost as a result of the truncating mutation. Overall, the p.His559Glnfs*3 mutant lacks some of the IDRs but has a structural conformation similar to the wild-type protein (Fig. B). Subsequently, we examined stable ADNP protein levels in several subcellular compartments, including the cytoplasm, the nucleus with chromatin-enriched proteins, and the cytoskeleton, in HEK293T overexpression lysates. In the cytoplasm, we detected wild-type (150 kDa) and mutant (63 kDa) ADNP using an N-terminal antibody, showing no significant difference in expression levels (p = 0.71; ns). In the chromatin-bound fraction, we visualized the wild-type and mutant protein with a significant decrease in mutant protein levels (p = 0.03; *). Moreover, we demonstrated the expression of mutant and wild-type ADNP in the cytoskeletal protein fraction. However, we did not observe a significant difference (p = 0.42; ns) in the expression of the mutant compared to the wild-type protein (Fig. C).

Genome-wide methylation analysis of the cerebellum demonstrates abnormalities of the cytoskeleton and autophagy together with an aberrant transcription factor function of ADNP during development

As methylation signatures are robust and even conserved in ancient DNA, we decided to start our exploration by performing an EPIC BeadChip array on the cerebellum of the deceased ADNP patient and an age-matched control brain. Here, we show enrichment of 6289 CpG probes with a minimum 20% difference in methylation in the ADNP patient compared to the control. Specifically, we identified 2394 CpG probes showing hypermethylation (Δβ > 0.2), whereas 3895 CpG probes were hypomethylated (Δβ < −0.2).
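The Δβ cutoffs described above amount to a simple per-probe filter on the patient-minus-control difference in beta values. The probe names and beta values below are illustrative only, loosely echoing genes mentioned in the text; they are not the array data.

```python
import numpy as np

# Toy beta-value table: rows = CpG probes, columns = (patient, control).
probes = ["cg_OTX2", "cg_COL4A2", "cg_MAGI2", "cg_noise"]
beta = np.array([
    [0.56, 0.20],   # hypothetical: higher methylation in patient
    [0.01, 0.45],   # hypothetical: lower methylation in patient
    [0.02, 0.40],
    [0.50, 0.45],   # below the effect-size cutoff
])

delta_beta = beta[:, 0] - beta[:, 1]  # Δβ = patient minus control
hyper = [p for p, d in zip(probes, delta_beta) if d > 0.2]   # Δβ > 0.2
hypo  = [p for p, d in zip(probes, delta_beta) if d < -0.2]  # Δβ < -0.2
```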
In addition, 1547 hypermethylated gene probes could be annotated to 1162 genes, while 2500 hypomethylated gene probes were associated with 1842 genes (Additional file: Data S1), indicating a Class I episignature and extending findings from peripheral blood to the human brain for the first time (Fig. A). Next, we confirmed a selection of genes prioritized for methylation in the 5′ UTR, 3′ UTR and transcription start site (TSS), together with associations to autism or other Helsmoortel–Van der Aa syndrome-related clinical features. We selected the hypermethylated genes OTX2, SLC25A21, and DNAJ6 and the hypomethylated genes COL4A2, MAGI2, and CTNND2 for pyrosequencing. Here, we could confirm a higher percentage of CpG methylation in the patient for OTX2 (56%), SLC25A21 (86%), and DNAJ6 (85%) compared to the control subject. Conversely, we demonstrated a lower percentage of CpG methylation in the patient for COL4A2 (1%), MAGI2 (2%), and CTNND2 (3%) (Fig. B). Next, we performed functional annotation of the hyper- and hypomethylated genes using Metascape. Enriched biological processes and GO terms included actin filament-based processes, cell adhesion, nervous system development, muscle contraction, brain development, the WNT signaling pathway, regulation of membrane potential, and synaptic transmission, amongst others (Fig. C). Functional enrichment analysis of protein–protein interactions was performed for ADNP using the STRING database. We identified four suggested interactions of ADNP with WDFY3, UBR5, FAT1, and NFIA, which play a role in autophagy of the mitochondria, protein ubiquitination, macro-autophagy, and autophagosome and autolysosome formation (Fig. D). Given the role of Adnp as a putative transcription factor, we performed a transcription factor enrichment of both hyper- and hypomethylated genes.
Here, we identified a module of 44 co-expressed genes, which were subsequently imported into Cytoscape using the iRegulon function for TF enrichment (Additional file: Data S2). We observed a stronger enrichment of TFs associated with hypomethylated genes (red) than with hypermethylated genes (blue) and shared TFs (green). Among the upregulated TFs associated with hypomethylated genes were pluripotency and cell fate-determining genes such as POU2F1, TEAD2, SOX1/4, GATA1/2/3/5/6, PAX4/6, NANOG, and NEUROD1, as well as chromatin modifiers like YY1, SIN3A and ADNP itself. On the other hand, the downregulated TF cluster associated with hypermethylated genes was also enriched for PAX- and SOX-related genes, indicating abnormal lineage specification of neuronal progenitor cells. The shared TF cluster contained HNF1A, a gene controlling the expression of several liver-specific genes (Fig. E). Our genome-wide cerebellar methylation analysis provides strong molecular evidence for a deregulated function of ADNP as a transcription factor, impacting lineage specification and genes implicated in brain development.

RNA sequencing substantiates downregulation of the WNT signaling pathway and autophagy defects in cerebellar autopsy tissue

To determine differential expression beyond methylation differences, we performed bulk transcriptome sequencing of cerebellar tissue from the ADNP autopsy. As RNA is much less stable over time, we first performed extensive quality control by evaluating total RNA purity and integrity (see experimental methods). Using bulk mRNA sequencing, we determined the gene ratio (patient/control) using the NOISeq algorithm, a non-parametric method for comparing samples without biological replicates, which reports the log2-ratio of the two conditions (M) and the value of the difference between the conditions (D). We tested for differential expression across all 7659 genes that appeared in our data set (Additional file: Data S3).
In line with the observation of an excess of hypomethylated CpG probes, we observed an excess of upregulated genes. Using a significance cut-off of p < 0.05, FDR < 0.05, and a biologically meaningful log2FC (M-value) > 0.5, we found 514 downregulated and 1520 upregulated genes with differential expression (Fig. A). Gene expression alterations in the ADNP cerebellum were notable, with the majority of genes presenting with an M-value < 5. Gene ontology (GO) enrichment revealed downregulation of glutamatergic synaptic transmission, abnormal cardiac muscle cell conductivity, and nervous system development, whereas cytoskeleton dynamics were upregulated. A marked enrichment of immune system-related responses was observed, potentially related to the patient's immunosuppressant treatment (Fig. B). We confirmed a selected set of genes with RT-PCR, including the RNA-methylation gene METTL3 (p = 0.005; **), the autophagy inducer BECN1 (p < 0.0001; ****), and the WNT signaling ligand CTNNB1 (p = 0.001; **) (Fig. C). To better interpret the differential expression in the ADNP brain, we compared the transcriptome analysis of the autopsy with the differential expression observed in immortalized LCLs of multiple patients with different ADNP mutations. We tested for differential expression across approximately 10,000 protein-coding transcripts that appeared in our data set (Additional file: Data S4). Using the same cut-off criteria as for the autopsy, we found 1730 downregulated and 3278 upregulated genes with differential expression, indicating that ADNP mutations predominantly induce gene upregulation (Fig. D). Fast Gene Set Enrichment Analysis (fgsea) identified molecular pathways similar to those identified in the autopsy (Fig. E).
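The M/D quantities and the |M| > 0.5 effect-size filter used above can be illustrated with toy numbers. NOISeq itself also assesses significance against a noise distribution; this sketch shows only the log2-ratio (M) and absolute-difference (D) bookkeeping, with hypothetical expression values.

```python
import math

# Hypothetical normalized expression values (patient, control) per gene.
expr = {
    "BECN1":  (30.0, 120.0),   # down in patient
    "METTL3": (200.0, 80.0),   # up in patient
    "ACTB":   (500.0, 480.0),  # essentially unchanged
}

results = {}
for gene, (pat, ctl) in expr.items():
    m = math.log2(pat / ctl)   # M: log2-ratio of the two conditions
    d = abs(pat - ctl)         # D: absolute difference between the conditions
    results[gene] = (m, d)

# Effect-size filter as in the text: |M| (log2 fold change) > 0.5.
up   = [g for g, (m, _) in results.items() if m > 0.5]
down = [g for g, (m, _) in results.items() if m < -0.5]
```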
We confirmed a subset of five genes with RT-PCR, including the heterochromatin marker and ADNP-interacting gene CBX3 (p = 0.01; *), the WNT signaling member WNT10A (p = 0.003; **), the actin-cadherin mediator CTNNAL1 (p = 0.003; **), as well as the nonsense-mediated decay members SMG5 (p = 0.0002; ***) and UPF3B (p = 0.005; **) (Fig. F). To investigate the potential impact of the ADNP mutation in the human brain, we intersected the DEGs identified in both data sets (Additional file: Data S5), which revealed an overlap of 241 genes between the ADNP autopsy brain and LCLs (Fig. A). We observed a striking resemblance in biological relevance for genes involved in endoderm specification IGFBP2 (brain, * p = 0.03; LCL, * p = 0.04), canonical WNT signaling WNT2 (brain, * p = 0.01; LCL, ** p = 0.01), the mitochondrial transporter SLC25A25 (brain, * p = 0.02; LCL, * p = 0.03), autophagy regulation RUBCN (brain, **** p < 0.0001; LCL, * p = 0.003), hematopoietic stem cell differentiation RUNX1 (brain, ** p = 0.001; LCL, *** p = 0.001), N6-adenosine methylation METTL3 (brain, ** p = 0.005; LCL, **** p < 0.0001), and bone and teeth development BMP6 (brain, ** p = 0.002; LCL, * p = 0.04) (Fig. B, C). In conclusion, these robust gene expression changes related to the nervous system and morphogenesis underline a regulating role of ADNP in the human brain and blood of patients, confirmed by salient pathways including WNT signaling, autophagy, and bone development, together with involvement in processes such as hematopoietic stem cell differentiation and unexpected RNA methylation.

Shotgun proteomics links chromatin remodeling to autophagy in the ADNP autopsy brain

As post-transcriptional regulation can further increase variation in gene expression levels, proteome analysis was performed by label-free quantitation (LFQ) mass spectrometry on the cerebellum to study the effect of the c.1676dupA/p.His559Glnfs*3 ADNP mutation at the protein expression level.
Chromatographic conditions between different runs were highly reproducible, resulting in a strong correlation between LFQ intensities and technical replicates (Additional file: Data S6). Overall, we detected approximately 1522 protein groups per sample under a 1% false-discovery rate (FDR) with fixed modifications of carbamidomethylation (C), deamidation (QN) and oxidation (M). Moreover, we identified 4552 proteins with more than two unique peptides, 988 proteins with at least two unique peptides, and 1477 with one unique peptide. Next, we used MetaboAnalyst 5.0 to quantify differences detected in the patient versus control cerebellum. Among the 2455 quality-filtered proteins, we detected 492 proteins with differential expression (Additional file: Data S7), of which 224 were significantly downregulated, while 268 showed significant upregulation in the post-mortem patient cerebellum (two-tailed Student's t-test; padj < 0.05). Next, we plotted the top 10 downregulated (blue) and upregulated (red) proteins identified in the patient versus control brain, showing a clear upregulation of, amongst others, the major ADNP-interacting protein Heterochromatin Protein 1 homolog beta (CBX1/HP1β), indicating that ADNP can affect the expression of its direct interaction partner (Fig. A). Subsequently, we performed immunoblot experiments to confirm the downregulation of β-catenin and BECN1 protein levels in the patient brain, in line with their decreased transcription levels. Surprisingly, we also observed differential expression of an additional autophagy marker, MAP1LC3A, in the ADNP brain, consistent with the aberrant autophagy defects in our transcriptome data (Fig. B). Clustering of the differentially expressed proteins (DEPs) into canonical pathways using IPA indicated decreased activity of mitochondrial oxidative phosphorylation, sirtuin signaling and RhoA signaling.
In contrast, IPA predicted an increase in EIF2 signaling, the spliceosomal cycle and protein kinase A signaling in the patient. We also observed an enrichment of pathways with no difference in activity, including granzyme A signaling, mTOR signaling, T-helper signaling, and apoptosis (Fig. C). Next, we mapped all DEPs in a functional enrichment analysis and predicted possible protein–protein interactions of ADNP with the identified DEPs as well as with other biologically correlated proteins. Of particular interest, the histone deacetylase sirtuin 1 (SIRT1), in the center of the protein network, was found to link various chromatin modifier proteins such as MECP2, ADNP, SMARCC2, and HDAC2 including YY1, and chromatin-associating proteins such as CBX1/3 and histones H1F0 and H1.2, to autophagy regulators like MAP1LC3A and LAMP1 (Fig. D). In this section, we showed that the proteomic landscape of ADNP brain autopsy material corroborates our transcriptome findings, e.g., upregulation of the ADNP interactor CBX1/HP1β together with a downregulation of β-catenin and BECN1, supported by abnormalities attributed to the WNT signaling pathway and autophagy.

ADNP and SIRT1 co-immunoprecipitate with the microtubule end-binding proteins 1 and 3 (EB1/EB3)

Recently, various studies identified an association between mitochondrial dysfunction, autophagy regulation, and autism spectrum disorders. Similarly, our proteomic protein–protein interaction study mapped SIRT1 at the crossroads of chromatin remodelers and autophagic regulators in the ADNP autopsy brain. In addition, SIRT1 has been shown to maintain genomic stability, to enhance synaptic plasticity, to suppress inflammation, to fulfill a neuroprotective function, and to positively regulate autophagy and mitochondrial function. SIRT1 is also known to modulate chromatin structure by activating BRG1, a chromatin remodeling interaction partner of ADNP in the SWI/SNF complex.
Hence, we reasoned that ADNP and SIRT1 may share common regulatory partners in chromatin remodeling and microtubule dynamics that regulate autophagy. To further validate a direct protein interaction of ADNP and SIRT1 in the human brain, co-immunoprecipitation (Co-IP) experiments were performed. However, due to the instability of the ADNP protein, these experiments were not successful. Therefore, we alternatively demonstrated the subcellular localization of Adnp and Sirt1 by immunostaining the cerebellum dissected from male C57BL/6JCr wild-type mice as a model for the human condition. Here, cerebellar cryosections were immunostained with primary monoclonal ADNP and SIRT1 antibodies (Cy3 red fluorescent signal), and nuclei were counterstained with DAPI (blue). Adnp expression was predominantly detected in the nucleus, with occasional weak cytoplasmic signals, visualized by the overlap of the red Adnp signal and blue DAPI counterstaining. In contrast, Sirt1 was predominantly situated in the cytoplasm of Purkinje cells in the cerebellum, with occasional nuclear immunoreactivity (Fig. A). Indeed, an indirect interaction of ADNP and SIRT1 was shown in SH-SY5Y cells, which could not be validated in human induced pluripotent stem cell (hiPSC)-derived differentiated neuronal cells. Therefore, we next investigated the potential indirect interaction of Adnp and Sirt1 through the EB1/EB3 proteins in murine cerebellar brain lysates with a co-immunoprecipitation assay. During this process, we performed stringent washing steps using high-detergent buffers to prevent false-positive binding. In addition, we controlled each western blot with GAPDH, whose signal was absent after co-immunoprecipitation of the bait protein. We observed specific co-immunoprecipitation of Adnp (150 kDa) and Sirt1 (100 kDa) in the presence of both EB1 (30 kDa) and EB3 (32 kDa) antibodies.
IgG non-reactive beads were used as a negative control, showing no immunoreactivity of Adnp or Sirt1 together with EB1 and EB3 in the eluted fraction (Fig. B). To better understand the physical connections between Adnp (UniProt; Q9Z103) and Sirt1 (UniProt; Q53Z05), we applied a eukaryotic linear motif (ELM) analysis to unravel shared motifs (Fig. C). Interestingly, as partially shown before, we identified a series of common interaction motifs, including (1) an SxIP motif for Adnp (aa 354–360, NAPVSIP, p = 0.01) and a similar SSIP motif for Sirt1 (aa 440–448, VALIPSSIP, p = 0.0002), (2) SH3 domains for Adnp (i.e., aa 189–195, FQHVAAP, p = 0.01) and Sirt1 (i.e., aa 506–511, PPRPQK, p = 0.001), and (3) 14-3-3 motifs for Adnp (i.e., aa 16–20, RKTVK, p = 0.004) and Sirt1 (i.e., aa 333–342, RNYTQNIDTL, p = 0.004). The presence of an SxIP motif is a unique feature ascribed to both ADNP and SIRT1, as only 42 proteins have been identified by mass spectrometry-based methods to contain this conserved motif. Original studies identified the microtubule end-binding proteins EB1 and EB3 as interaction partners of ADNP through its SxIP motif. Along the same line, we predicted a physical interaction between Adnp, Sirt1 and EB1/EB3 in silico via 3D molecular docking. Upon ranking the models according to the number of interacting amino acids, the top 10 docking interactions were derived from ClusPro and processed in ChimeraX. Molecular docking revealed possible Adnp (blue) binding to both microtubule end-binding proteins EB1 (left, violet) and EB3 (right, pink) via amino acids 358–360, corresponding to its SxIP motif. In addition, Sirt1 was predicted to interact with the EB1 (left, violet) and EB3 (right, pink) proteins through its similar SSIP motif at amino acid positions 446–448 (Fig. D). In conclusion, our findings suggest that Adnp and Sirt1 might indirectly co-immunoprecipitate in the presence of the EB1/EB3 proteins via the SxIP motif of ADNP and the SSIP motif of SIRT1.
The ADNP-EB1/EB3-SIRT1 complex regulates mitochondrial autophagy (mitophagy) and respiratory functions

RNA sequencing data of the ADNP brain autopsy were subsequently analyzed for the enrichment of specialized pathways associated with autophagy, neuroprotection, and mitochondrial biogenesis with a customized gene toolbox. By this approach, we found enrichment of genes involved in mitophagy (Additional file: Table S5), an autophagy-dependent process regulating mitochondrial homeostasis. Similarly, RNA sequencing data of the ADNP patient LCLs also revealed enrichment of mitophagy-related genes, e.g., Ubiquitin-Specific Peptidase 15 (USP15), mitochondrial distribution and morphology regulator 1 (MSTO1), mitochondrial fission regulator 2 (MTFR2), apoptosis-inducing factor mitochondria-associated 1 (AIFM1), Mitofusin 2 (MFN2), Beclin 1 (BECN1), and mitochondrial Elongation Factor 1 (MIEF1) (Fig. A). RNAseq results for nine DEGs were further validated by RT-PCR, including MFN2 (p < 0.0001; ****), MAPK1 (p = 0.01; *), BECN1 (p < 0.006; **), MCL1 (p = 0.001; **), USP15 (p = 0.0002; ***), USP8 (p = 0.02; *), TBK1 (p = 0.03; *), UBE2N (p = 0.004; **), and MTFR2 (p = 0.002; **) (Fig. B). Since we obtained two reciprocally regulated mitophagy gene clusters in the human brain (Additional file: Data S8), we next determined the autophagic flux by bafilomycin A1 treatment in LCLs derived from ADNP patients and sex- and age-matched control subjects, followed by western blot detection of p62 and LC3 (Additional file: Data S9). Prior to bafilomycin A1 treatment, we detected a non-significant increase in the levels of p62/SQSTM1 (p = 0.38; ns) and LC3 expression (p > 0.99; ns) in patient-derived LCLs. Following bafilomycin A1 treatment, the expression of p62 (p = 0.18; ns) increased in the ADNP-deficient cell lines, although non-significantly.
However, the levels of LC3 ( p < 0.0001; ****) showed a remarkable increase in the ADNP-patient cell lines, confirming an increased autophagic flux. Given the mitophagy gene signature in LCLs from ADNP patients together with downregulated mitochondrial protein functions identified by LFQ-MS in the human ADNP autopsy brain, we subsequently investigated relative changes in mitochondrial activity and subcellular localization in patient-derived ADNP fibroblasts compared to two unaffected controls using the MitoTracker® Red CM-H2XRos probes. Here, we observed a rather faint fluorescence intensity in the ADNP patient fibroblasts compared to the controls. Furthermore, we did not observed a difference in the subcellular localization of the mitochondria in patient fibroblasts as compared to the control subjects (Fig. C). Relative MitoTracker® Red CM-H2XRos probe fluorescence per cell, indicative of mitochondrial redox activity, was quantified in patient and control fibroblasts using the Tecan Spark™ and normalized to the brightfield cell count. The mitochondria of patient-derived skin fibroblasts showed a remarkable decrease ( p = 0.01; *) in fluorescent intensity compared to the control cells, indicating aberrant mitochondrial activity (Fig. D). To rule out the possible decrease in the mitochondrial copy number in ADNP LCLs and skin fibroblasts, we determined the mtDNA/nDNA ratio (tRNAleu/B2M) by RT-PCR. Since we could not detect a significant difference in the number of mitochondria in ADNP cell lines compared to controls (Additional file : Data S10), we next investigated mitochondrial respiration using the Seahorse analyzer and further addressed the observed mitochondrial dysfunction in patient-derived (blue) and control (red) fibroblasts using the Cell Mito Stress Test to measure changes in oxygen consumption rate (OCR) before/after administration of specific compounds that sequentially affect the different complexes of the mitochondrial respiratory chain. 
The changes in OCR allowed quantification of several aspects of the mitochondrial respiration. Measurements of the basal respiration showed a significant decrease ( p = 0.04; *) in the patient fibroblasts compared to the control lines, confirming a reduced activity measured in the fluorescent mitochondrial staining. Besides, we also observed decreased values of proton leak ( p = 0.21; ns), ATP-linked respiration ( p = 0.06; ns) and maximal respiratory capacity ( p = 0.28; ns) in the ADNP cell lines, although the difference was not significant. Lastly, we observed no difference in the spare respiratory capacity ( p = 0.84; ns) and non-mitochondrial respiration ( p = 0.79; ns) in patient or control lines (Fig. E, F). The patient was born prematurely, at 32 weeks of gestational age, from healthy, non-consanguineous parents. His birth weight was 1790 g, the Apgar score was 10/10. An intracranial hemorrhage grade III was diagnosed. Clinical reports showed that the patient presented with motor delays, developmental delays, autism spectrum disorder, hypotonia, and small genitalia. His parents also reported visual impairments, feeding and eating problems, as well as sleep disorders. Phenotypically, the patient presented with a prominent forehead and eyelashes, downward slanting eyes, malformed ears, wide nasal bridge, broad and long philtrum, large mouth with thick lower vermillion, pointed chin and widely spaced teeth (Fig. A, B), all well-defined characteristics described in a cohort of 78 Helsmoortel–Van der Aa patients (Additional file : Table S6). At the age of 2.5 years, he developed an upper respiratory tract infection complicated with hepatitis and seizures. He was transferred to ICU where supportive treatment and plasmapheresis were started. Liver biopsy showed extensive necrosis of parenchyma and moderate cholestasis. 
MRI showed diffuse cortical atrophy of the brain parenchyma, marked reduction in volume of the white matter, as well as gliosis in both frontal and temporoparietal lobes that could indicate the sequelae of acute hepatic encephalopathy. He developed refractory generalized epilepsy and received a combination treatment of antiepileptic drugs, e.g., carbamazepine, oxcarbazepine, levetiracetam, clonazepam, clobazam and topiramate. During his lifespan, he underwent two liver transplantations and received immunosuppressants. Following the second liver transplant, at the age of six years and three months, the child passed away because of multiple organ failure. An autopsy was performed, and various tissue samples were donated with informed consent. Molecular testing had indicated that the patient was negative for any heritable metabolic disorders.

Whole-exome sequencing (WES) of the patient’s blood revealed a heterozygous de novo duplication of adenine at position 1676 of the ADNP gene, at position chr20:50,893,037-50,893,039 (RefSeq isoform ENST00000621696.5, Human GRCh38/hg38). The mutation was confirmed by Sanger sequencing (Fig. C). It converts the histidine (His) residue at position 559 to glutamine (Gln), leading to a frameshift with a premature stop codon two amino acids downstream (Fig. D, E). Cerebellar tissue, known for the highest ADNP expression , allowed us to validate the presence of ADNP mRNA and protein in autopsy material by performing an expression analysis using real-time reverse-transcription PCR (RT-PCR) and Western blotting. To investigate wild-type ADNP mRNA levels, we designed a primer set at the 3’ region of exon 6 (corresponding to the C-terminal portion of the protein). Here, a significant two-fold increase in total ADNP levels was observed in the patient compared to the control subject ( p = 0.0001; ***), consistent with findings in our RNA sequencing described below (Fig. A).
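Relative RT-PCR quantification of this kind is conventionally computed with the 2^-ddCt (Livak) method. The sketch below illustrates that arithmetic only; the study does not report its exact quantification pipeline, and the Ct values and reference-gene setup are invented.

```python
# Hypothetical sketch of relative RT-PCR quantification via the
# 2^-ddCt (Livak) method. All Ct values below are invented; the paper
# does not state its quantification pipeline.

def fold_change(ct_target_sample, ct_ref_sample, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene in sample vs. control,
    normalized to a reference (housekeeping) gene."""
    d_ct_sample = ct_target_sample - ct_ref_sample   # normalize sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl         # normalize control
    dd_ct = d_ct_sample - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# Example: target amplifies one cycle earlier in the patient -> 2-fold up
print(fold_change(24.0, 18.0, 25.0, 18.0))  # 2.0
```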
Attempts to quantify the 5’ end of the transcript were not successful, suggesting partial mRNA degradation. At the protein level, we tested endogenous ADNP levels in the human brain using extensively validated C-terminal and N-terminal ADNP antibodies . We were able to detect wild-type ADNP (150 kDa) in the control brain, but not in the patient, using both antibodies (Fig. B, C). To investigate the co-expression of the full-length and mutant protein, we co-transfected wild-type and p.His559Glnfs*3 mutant N-DYKDDDDK (Flag®) expression vectors in HEK293T cells. Co-expression of wild-type and mutant ADNP demonstrated the presence of the wild-type protein (150 kDa) together with a truncated mutant protein (63 kDa) using an N-terminal antibody, mimicking the expected expression in the patient. C-terminal antibody incubation resulted in the detection of wild-type ADNP (150 kDa) exclusively. Together, these findings confirm an apparent molecular weight of ADNP (150 kDa), above its calculated molecular weight of 123 kDa, but show instability of the protein in post-mortem brain material of the patient.

To study the molecular impact of the patient mutation, we performed in silico modeling of the wild-type ADNP protein (UniProt; Q9H2P0) and the p.His559Glnfs*3 mutant using AlphaFold. Here, the structure of the wild-type protein demonstrated the DNA-binding homeobox domain in proximity to the bipartite NLS sequence, whereas the neuroprotective NAP motif resides at the surface of the protein, partially occluded by flexible intrinsically disordered regions (IDRs) and low-complexity regions (LCRs) located near the C-terminus, suggestive of a role in protein–protein interactions . Moreover, the eIF-4E binding motifs and the glutaredoxin active site are centrally positioned in the core of the wild-type protein, assembling several of its zinc finger motifs (Fig. A). The p.His559Glnfs*3 mutant truncates the NLS region, impairing nuclear transport .
Moreover, downstream protein domains, including the DNA-binding homeodomain and the HP1 binding motif, are also lost as a result of the truncating mutation. Overall, the p.His559Glnfs*3 mutant lacks some of the IDRs but has a similar structural conformation compared to the wild-type protein (Fig. B). Subsequently, we examined stable ADNP protein levels in several subcellular compartments, including the cytoplasm, the nucleus with chromatin-enriched proteins, and the cytoskeleton, in HEK293T overexpression lysates. In the cytoplasm, we detected wild-type (150 kDa) and mutant (63 kDa) ADNP using an N-terminal antibody, showing no significant difference in expression levels ( p = 0.71; ns). In the chromatin-bound fraction, we visualized the wild-type and mutant protein, with a significant decrease of mutant protein levels ( p = 0.03; *). Moreover, we demonstrated the expression of mutant and wild-type ADNP in the cytoskeletal protein fraction. However, we did not observe a significant difference ( p = 0.42; ns) in the expression of the mutant compared to the wild-type protein (Fig. C).

As methylation signatures are robust and even conserved in ancient DNA , we decided to start our exploration by performing an EPIC BeadChip array on the cerebellum of the deceased ADNP patient and an age-matched control brain. Here, we show enrichment of 6289 CpG probes with a minimum 20% difference in methylation in the ADNP patient compared to the control. Specifically, we identified 2394 CpG probes showing hypermethylation (Δβ > 0.2), whereas 3895 CpG probes were hypomethylated (Δβ < −0.2). In addition, 1547 hypermethylated gene probes could be annotated to 1162 genes, while 2500 hypomethylated gene probes were associated with 1842 genes (Additional file : Data S1), indicating a Class I episignature , extending findings from peripheral blood to the human brain for the first time (Fig. A).
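The Δβ thresholding described above (hypermethylated if patient-minus-control β > 0.2, hypomethylated if < −0.2) can be sketched as a simple filter. Probe IDs and β values below are invented; the actual study used a full EPIC BeadChip analysis pipeline.

```python
# Minimal sketch of the delta-beta filtering step: probes with a
# patient-minus-control beta difference > 0.2 are called hypermethylated,
# < -0.2 hypomethylated. Probe IDs and values are invented.

def classify_probes(beta_patient, beta_control, threshold=0.2):
    hyper, hypo = [], []
    for probe, b_pat in beta_patient.items():
        delta = b_pat - beta_control[probe]
        if delta > threshold:
            hyper.append(probe)
        elif delta < -threshold:
            hypo.append(probe)
    return hyper, hypo

patient = {"cg0001": 0.90, "cg0002": 0.10, "cg0003": 0.55}
control = {"cg0001": 0.40, "cg0002": 0.60, "cg0003": 0.50}
hyper, hypo = classify_probes(patient, control)
print(hyper, hypo)  # ['cg0001'] ['cg0002']
```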
Next, we confirmed a selection of genes prioritized for methylation in the 5’ UTR, 3’ UTR and transcription start site (TSS), together with associations to autism or other Helsmoortel–Van der Aa syndrome-related clinical features. We selected the hypermethylated genes OTX2, SLC25A21, and DNAJ6 and the hypomethylated genes COL4A2, MAGI2, and CTNND2 for pyrosequencing. Here, we confirmed a higher percentage of CpG methylation in the patient for OTX2 (56%), SLC25A21 (86%), and DNAJ6 (85%) compared to the control subject. Conversely, we demonstrated a lower percentage of CpG methylation in the patient for COL4A2 (1%), MAGI2 (2%), and CTNND2 (3%) (Fig. B). Next, we performed functional annotation of the hyper- and hypomethylated genes using Metascape. Enriched biological processes and GO terms included actin filament-based processes, cell adhesion, nervous system development, muscle contraction, brain development, the WNT signaling pathway, regulation of membrane potential, and synaptic transmission, amongst others (Fig. C). Functional enrichment analysis for protein–protein interactions was predicted for ADNP using the STRING database. We identified four suggested interactions of ADNP with WDFY3, UBR5, FAT1, and NFIA, which play a role in autophagy of the mitochondria, protein ubiquitination, macro-autophagy, and autophagosome and autolysosome formation (Fig. D). Given the role of Adnp as a putative transcription factor , we performed a transcription factor enrichment of both hyper- and hypomethylated genes. Here, we identified a module of 44 co-expressed genes, which were subsequently imported into Cytoscape using the iRegulon plugin for TF enrichment (Additional file : Data S2). We observed a stronger enrichment of TFs associated with hypomethylated genes (red) than with hypermethylated genes (blue) and shared TFs (green).
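Over-representation statistics of the kind computed internally by tools such as Metascape or iRegulon reduce to a hypergeometric upper-tail test. A stdlib-only sketch with invented counts, not the tools' actual statistics:

```python
# Generic over-representation (hypergeometric upper-tail) test of the
# kind underlying enrichment tools. All counts are invented.
from math import comb

def hypergeom_pval(hits, gene_list, term_size, universe):
    """P(X >= hits) when drawing gene_list genes from a universe
    containing term_size genes annotated to the term."""
    p = 0.0
    for k in range(hits, min(gene_list, term_size) + 1):
        p += comb(term_size, k) * comb(universe - term_size, gene_list - k)
    return p / comb(universe, gene_list)

# 12 of 100 selected genes carry an annotation held by 200 of 10000 genes
# (expected by chance: 2), so the term is strongly enriched.
p = hypergeom_pval(12, 100, 200, 10000)
print(p < 0.05)  # True
```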
The upregulated TFs associated with hypomethylated genes included pluripotency and cell fate-determining genes such as POU2F1 , TEAD2 , SOX1/4 , GATA1/2/3/5/6, PAX4/6, NANOG, and NEUROD1 , as well as chromatin modifiers like YY1, SIN3A and ADNP itself. On the other hand, the downregulated TF cluster associated with hypermethylated genes was also enriched for PAX - and SOX -related genes, indicating abnormal lineage specification of neuronal progenitor cells. The shared TF cluster included HNF1A , a gene controlling the expression of several liver-specific genes (Fig. E). Our genome-wide cerebellar methylation analysis indicates strong molecular evidence for a deregulated function of ADNP as a transcription factor, impacting lineage specification and genes implicated in brain development.

To determine differential expression beyond methylation differences, we performed bulk transcriptome sequencing of cerebellar tissue of the ADNP autopsy. As RNA is much less stable over time, we first performed an extensive quality control by evaluating total RNA purity and integrity (see experimental methods). Using bulk mRNA sequencing, we determined the gene ratio (patient/control) using the NOISeq algorithm, a non-parametric method for comparing samples without biological replicates, reporting the log2-ratio of the two conditions (M) and the value of the difference between the conditions (D) . We tested for differential expression across all 7659 genes that appeared in our data set (Additional file : Data S3). In line with the observation of an excess of hypomethylated CpG probes, we observed an excess of upregulated genes. Using a significance cut-off of p value < 0.05, FDR < 0.05, and a biologically meaningful log2FC (M-value) > 0.5, we found 514 downregulated and 1520 upregulated genes with differential expression (Fig. A). Gene expression alterations in the ADNP cerebellum were notable, with the majority of genes presenting an M-value < 5.
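The stated DEG selection rule (p < 0.05, FDR < 0.05, |log2FC| > 0.5, with the sign of the fold change giving direction) can be sketched as a simple filter; the gene records below are invented for illustration.

```python
# Sketch of the DEG selection rule applied to a NOISeq-style table of
# per-gene results (gene, log2FC, p, FDR). Records are invented.

def split_degs(records, p_cut=0.05, fdr_cut=0.05, lfc_cut=0.5):
    up, down = [], []
    for gene, log2fc, p, fdr in records:
        if p < p_cut and fdr < fdr_cut and abs(log2fc) > lfc_cut:
            (up if log2fc > 0 else down).append(gene)
    return up, down

table = [
    ("BECN1", -1.2, 0.001, 0.01),   # significant, down
    ("CBX1",   0.9, 0.004, 0.02),   # significant, up
    ("ACTB",   0.1, 0.600, 0.80),   # unchanged
]
up, down = split_degs(table)
print(up, down)  # ['CBX1'] ['BECN1']
```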
Gene ontology (GO) enrichment revealed downregulation of glutamatergic synaptic transmission, abnormal cardiac muscle cell conductivity, and nervous system development, whereas cytoskeleton dynamics were upregulated. A marked enrichment of immune system-related responses was observed that is potentially related to the patient’s immunosuppressant treatment (Fig. B). We confirmed a selected set of genes with RT-PCR, including the RNA-methylation gene METTL3 ( p = 0.005; **), autophagy inducer BECN1 ( p < 0.0001; ****), and WNT signaling ligand CTNNB1 ( p = 0.001; **) (Fig. C). To better interpret the differential expression in the ADNP brain, we compared the transcriptome analysis of the autopsy with the differential expression observed in immortalized LCLs of multiple patients with different ADNP mutations. We tested for differential expression across approximately 10,000 protein-coding transcripts that appeared in our data set (Additional file : Data S4). Using the same cut-off criteria as for the autopsy, we found 1730 downregulated and 3278 upregulated genes with differential expression, indicating that ADNP mutations predominantly induce gene upregulation (Fig. D). Fast Gene Set Enrichment Analysis (fgsea) identified similar molecular pathways as identified in the autopsy (Fig. E). We confirmed a subset of five genes with RT-PCR, including the heterochromatin marker and ADNP-interacting gene CBX3 ( p = 0.01; *), WNT signaling member WNT10A ( p = 0.003; **), actin-cadherin mediator CTNNAL1 ( p = 0.003; **), as well as nonsense-mediated decay members SMG5 ( p = 0.0002; ***) and UPF3B ( p = 0.005; **) (Fig. F). To investigate the potential impact of the ADNP mutation in the human brain, we intersected the DEGs identified in both data sets (Additional file : Data S5), which revealed an overlap of 241 genes between the ADNP autopsy brain and LCLs (Fig. A).
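The intersection step can be sketched as a set operation; requiring a concordant direction of change is an added illustrative assumption, and the fold changes below are invented.

```python
# Sketch of the brain/LCL DEG intersection: overlap the two DEG sets and
# keep genes whose direction of change agrees. Values are invented.

def shared_degs(brain, lcl):
    """brain/lcl map gene -> log2FC; return sorted genes present in
    both sets with the same sign of change."""
    common = set(brain) & set(lcl)
    return sorted(g for g in common if brain[g] * lcl[g] > 0)

brain = {"WNT2": -1.1, "METTL3": -0.8, "RUNX1": 0.9, "GAPDH": 0.6}
lcl   = {"WNT2": -0.7, "METTL3": -1.0, "RUNX1": 0.8, "ACTB": 0.2}
print(shared_degs(brain, lcl))  # ['METTL3', 'RUNX1', 'WNT2']
```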
We observed strikingly concordant changes in biologically relevant genes involved in endoderm specification IGFBP2 (brain, * p = 0.03; LCL, * p = 0.04), canonical WNT signaling WNT2 (brain, * p = 0.01; LCL, ** p = 0.01), mitochondrial transporter SLC25A25 (brain, * p = 0.02; LCL, * p = 0.03), autophagy regulation RUBCN (brain, **** p < 0.0001; LCL, * p = 0.003), hematopoietic stem cell differentiation RUNX1 (brain, ** p = 0.001; LCL, *** p = 0.001), N 6 -adenosine-methylation METTL3 (brain, ** p = 0.005; LCL, **** p < 0.0001), and bone and teeth development BMP6 (brain, ** p = 0.002; LCL, * p = 0.04) (Fig. B, C). In conclusion, these robust gene expression changes related to the nervous system and morphogenesis underline a regulatory role of ADNP in the human brain and blood of patients, confirmed by salient pathways including WNT signaling, autophagy, and bone development, together with involvement in processes such as hematopoietic stem cell differentiation and unexpected RNA methylation.

As post-transcriptional regulation can further increase variation in gene expression levels , proteome analysis was performed by label-free quantitation (LFQ) mass spectrometry on the cerebellum to study the effect of the c.1676dupA/p.His559Glnfs*3 ADNP mutation at the protein expression level. Chromatographic conditions between different runs were highly reproducible, resulting in a strong correlation of LFQ intensities across technical replicates (Additional file : Data S6). Overall, we detected approximately 1522 protein groups per sample under a 1% false-discovery rate (FDR) with fixed modifications of carbamidomethylation (C), deamidation (QN) and oxidation (M). Moreover, we identified 4552 proteins with more than two unique peptides, 988 proteins with at least two unique peptides, and 1477 with one unique peptide. Next, we used MetaboAnalyst 5.0 to quantify differences detected in patient versus control cerebellum.
Among the 2455 quality-filtered proteins, we detected 492 proteins with differential expression (Additional file : Data S7), of which 224 proteins were significantly downregulated, while 268 proteins showed a significant upregulation in the post-mortem patient cerebellum (two-tailed Student’s t-test; padj < 0.05). Next, we plotted the top 10 downregulated (represented in blue) and upregulated (represented in red) proteins identified in patient versus control brain, showing a clear upregulation of, amongst others, the major ADNP-interacting protein heterochromatin protein 1 homolog beta (CBX1/HP1β), indicating that ADNP can affect the expression of its direct interaction partner (Fig. A). Subsequently, we performed immunoblot experiments to confirm the downregulation of β-catenin and BECN1 protein levels in the patient brain, in line with their decreased transcript levels. Surprisingly, we also observed differential expression of an additional autophagy marker, MAP1LC3A, in the ADNP brain, consistent with the aberrant autophagy defects in our transcriptome data (Fig. B). Clustering of the differentially expressed proteins (DEPs) in canonical pathways using IPA indicated a decreased activity of mitochondrial oxidative phosphorylation, sirtuin signaling and RhoA signaling. In contrast, IPA predicted an increase in EIF2 signaling, the spliceosomal cycle and protein kinase A signaling in the patient. We also observed an enrichment of pathways with no predicted change in activity, including granzyme A signaling, mTOR signaling, T-helper cell signaling, and apoptosis (Fig. C). Next, we mapped all DEPs in a functional enrichment analysis and predicted possible protein–protein interactions of ADNP with the identified DEPs as well as with other biologically correlated proteins.
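The padj values above come from multiple-testing correction of the per-protein t-tests. A minimal Benjamini–Hochberg step-up sketch (assuming BH was the adjustment used, which the text does not state) with invented raw p-values:

```python
# Benjamini-Hochberg step-up adjustment, stdlib only.
# Raw p-values below are invented for illustration.

def benjamini_hochberg(pvals):
    """Return BH-adjusted p-values in the original input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity.
    for rank_from_end, i in enumerate(reversed(order)):
        rank = m - rank_from_end            # 1-based rank of pvals[i]
        running_min = min(running_min, pvals[i] * m / rank)
        adj[i] = running_min
    return adj

raw = [0.001, 0.008, 0.039, 0.041, 0.60]
print(benjamini_hochberg(raw))
```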
Of particular interest, the histone deacetylase sirtuin 1 (SIRT1), in the center of the protein network, was found to link various chromatin modifier proteins such as MECP2, ADNP, SMARCC2, and HDAC2, including YY1, and chromatin-associating proteins such as CBX1/3 and histones H1F0 and H1.2, to autophagy regulators like MAP1LC3A and LAMP1 (Fig. D). In this section, we showed that the proteomic landscape of ADNP brain autopsy material corroborates our transcriptome findings, e.g., upregulation of the ADNP interactor CBX1/HP1β together with a downregulation of β-catenin and BECN1, supported by abnormalities attributed to the WNT signaling pathway and autophagy.

Recently, various studies identified an association between mitochondrial dysfunction, autophagy regulation, and autism spectrum disorders . Similarly, our proteomic protein–protein interaction study mapped SIRT1 at the crossroads of chromatin remodelers and autophagic regulators in the ADNP autopsy brain. Moreover, SIRT1 has been discovered to maintain genomic stability , to enhance synaptic plasticity , to suppress inflammation , to fulfill a neuroprotective function , and to positively regulate autophagy and mitochondrial function . In addition, SIRT1 is known to modulate chromatin structure by activating BRG1, which is a chromatin remodeling interaction partner of ADNP in the SWI/SNF complex . Hence, we reasoned that ADNP and SIRT1 may share common regulatory partners in chromatin remodeling and microtubule dynamics that regulate autophagy. To further validate a direct protein interaction of ADNP and SIRT1 in the human brain, co-immunoprecipitation (co-IP) experiments were performed. However, due to the instability of the ADNP protein, co-IP experiments were not successful. Therefore, we alternatively demonstrated the subcellular localization of Adnp and Sirt1 by immunostaining the cerebellum dissected from male C57BL/6JCr wild-type mice as a model for the human condition.
Here, cerebellar cryosections were immunostained with primary monoclonal ADNP and SIRT1 antibodies (Cy3 red fluorescent signal), and nuclei were counterstained with DAPI (blue). Adnp expression was predominantly detected in the nucleus, with occasional weak cytoplasmic signals, visualized by the overlap of the red Adnp signal and blue DAPI counterstaining. In contrast, Sirt1 was predominantly situated in the cytoplasm of Purkinje cells in the cerebellum, with occasional nuclear immunoreactivity (Fig. A). Indeed, an indirect interaction of ADNP and SIRT1 was shown in SH-SY5Y cells, which could not be validated in human induced pluripotent stem cell (hiPSC)-derived differentiated neuronal cells . Therefore, we next investigated the potential indirect interaction of Adnp and Sirt1 through the EB1/EB3 proteins in murine cerebellar brain lysates with a co-immunoprecipitation assay. During this process, we performed stringent washing steps using high-detergent buffers to prevent false-positive binding. In addition, we controlled each western blot with GAPDH, whose signal was absent after co-immunoprecipitation of the bait protein. We observed specific co-immunoprecipitation of Adnp (150 kDa) and Sirt1 (100 kDa) in the presence of both EB1 (30 kDa) and EB3 (32 kDa) antibodies. IgG non-reactive beads were used as a negative control, showing no immunoreactivity of Adnp or Sirt1 together with EB1 and EB3 in the eluted fraction (Fig. B). To better understand the physical connections between Adnp (UniProt; Q9Z103) and Sirt1 (UniProt; Q53Z05), we applied a eukaryotic linear motif (ELM) analysis to unravel shared motifs (Fig. C).
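An ELM-style scan is, at its core, a pattern search over a protein sequence. The expression below is a deliberately simplified "SxIP-like" pattern covering the SIP/SSIP variants discussed here, not the official ELM motif definition; the sequence fragments are toy strings built around the reported motifs.

```python
# Simplified linear-motif scan in the spirit of an ELM analysis.
# "S.?IP" matches SIP, SxIP and SSIP cores; this is a reduced
# illustration, not the official ELM regular expression.
import re

SXIP_LIKE = re.compile(r"S.?IP")

def find_motifs(name, seq):
    """Return (name, 1-based start, matched text) for each hit."""
    return [(name, m.start() + 1, m.group()) for m in SXIP_LIKE.finditer(seq)]

# Toy fragments carrying the motifs reported in the text
print(find_motifs("Adnp_frag", "AANAPVSIPGG"))   # [('Adnp_frag', 7, 'SIP')]
print(find_motifs("Sirt1_frag", "VALIPSSIPKK"))  # [('Sirt1_frag', 6, 'SSIP')]
```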
Interestingly, as partially shown before, we identified a series of common interaction motifs, including (1) a SxIP motif for Adnp (aa 354–360, NAPVSIP, p = 0.01) and a similar SSIP motif for Sirt1 (aa 440–448, VALIPSSIP, p = 0.0002), (2) SH3 domains for Adnp (i.e., aa 189–195, FQHVAAP, p = 0.01) and Sirt1 (i.e., aa 506–511, PPRPQK, p = 0.001), and (3) 14-3-3 motifs for Adnp (i.e., aa 16–20, RKTVK, p = 0.004) and Sirt1 (i.e., aa 333–342, RNYTQNIDTL, p = 0.004). The presence of a SxIP motif is a unique feature ascribed to both ADNP and SIRT1, as only 42 proteins have been identified by mass spectrometry-based methods to contain this conserved motif . Original studies have identified the microtubule end-binding proteins EB1 and EB3 as interaction partners of ADNP through its SxIP motif . Along the same line, we predicted a physical interaction between Adnp, Sirt1 and EB1/EB3 in silico via 3D molecular docking. Upon ranking the models according to the number of interacting amino acids, the top 10 docking interactions were derived from ClusPro and processed in ChimeraX. Molecular docking revealed possible Adnp (blue) binding to both microtubule end-binding proteins EB1 (left, violet) and EB3 (right, pink) via amino acids 358–360, corresponding to its SxIP motif. In addition, Sirt1 was predicted to interact with the EB1 (left, violet) and EB3 (right, pink) proteins through its similar SSIP motif at amino acid position 446–448 (Fig. D). In conclusion, our findings suggest that Adnp and Sirt1 might indirectly co-immunoprecipitate in the presence of the EB1/EB3 proteins via the SxIP motif for ADNP and the SSIP motif for SIRT1.

The ADNP-EB1/EB3-SIRT1 complex regulates mitochondrial autophagy (mitophagy) and respiratory functions

RNA sequencing data of the ADNP brain autopsy was subsequently analyzed for the enrichment of specialized pathways associated with autophagy , neuroprotection , and mitochondrial biogenesis with a customized gene toolbox .
By this approach, we found enrichment of genes involved in mitophagy (Additional file : Table S5), an autophagy-dependent process regulating mitochondrial homeostasis . Similarly, RNA sequencing data of the ADNP patient LCLs also revealed enrichment of mitophagy-related genes, e.g., Ubiquitin Specific Peptidase 15 ( USP15 ), mitochondrial distribution and morphology regulator 1 ( MSTO1 ), mitochondrial fission regulator 2 ( MTFR2 ), apoptosis inducing factor mitochondria associated 1 ( AIFM1 ), Mitofusin 2 ( MFN2 ), Beclin 1 ( BECN1 ), and mitochondrial elongation factor 1 ( MIEF1 ) (Fig. A). RNAseq results of nine DEGs were further validated by RT-PCR, including MFN2 ( p < 0.0001; ****), MAPK1 ( p = 0.01; *), BECN1 ( p < 0.006; **), MCL1 ( p = 0.001; **), USP15 ( p = 0.0002; ***), USP8 ( p = 0.02; *), TBK1 ( p = 0.03; *), UBE2N ( p = 0.004; **), and MTFR2 ( p = 0.002; **) (Fig. B). Since we obtained two reciprocally regulated mitophagy gene clusters in the human brain (Additional file : Data S8), we next determined the autophagic flux by Bafilomycin A1 treatment in LCLs derived from ADNP patients and sex- and age-matched control subjects, followed by western blot detection of p62 and LC3 (Additional file : Data S9). Prior to Bafilomycin A1 treatment, we detected a non-significant increase in the levels of p62/SQSTM1 ( p = 0.38; ns) and LC3 expression ( p > 0.99; ns) in patient-derived LCLs. Following Bafilomycin A1 treatment, the expression of p62 ( p = 0.18; ns) increased in the ADNP-deficient cell lines, although non-significantly. However, the levels of LC3 ( p < 0.0001; ****) showed a marked increase in the ADNP patient cell lines, confirming an increased autophagic flux.
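Autophagic flux from such blots is commonly estimated as the Bafilomycin-induced accumulation of LC3-II over the untreated baseline after loading-control normalization. A sketch with invented band intensities (the paper reports statistics, not raw densitometry):

```python
# Sketch of a common autophagic-flux estimate from western blot
# densitometry: LC3-II accumulation under lysosomal blockade
# (Bafilomycin A1) minus the untreated baseline, each normalized to a
# loading control. All intensities below are invented.

def autophagic_flux(lc3_baf, loading_baf, lc3_untreated, loading_untreated):
    """LC3-II accumulation under lysosomal blockade (normalized units)."""
    return lc3_baf / loading_baf - lc3_untreated / loading_untreated

patient = autophagic_flux(9.0, 1.0, 3.0, 1.0)   # strong accumulation
control = autophagic_flux(5.0, 1.0, 3.0, 1.0)
print(patient > control)  # True
```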
Given the mitophagy gene signature in LCLs from ADNP patients, together with the downregulated mitochondrial protein functions identified by LFQ-MS in the human ADNP autopsy brain, we subsequently investigated relative changes in mitochondrial activity and subcellular localization in patient-derived ADNP fibroblasts compared to two unaffected controls using the MitoTracker® Red CM-H2XRos probe. Here, we observed a rather faint fluorescence intensity in the ADNP patient fibroblasts compared to the controls. Furthermore, we did not observe a difference in the subcellular localization of the mitochondria in patient fibroblasts as compared to the control subjects (Fig. C). Relative MitoTracker® Red CM-H2XRos probe fluorescence per cell, indicative of mitochondrial redox activity, was quantified in patient and control fibroblasts using the Tecan Spark™ and normalized to the brightfield cell count. The mitochondria of patient-derived skin fibroblasts showed a marked decrease ( p = 0.01; *) in fluorescence intensity compared to the control cells, indicating aberrant mitochondrial activity (Fig. D). To rule out a decrease in mitochondrial copy number in ADNP LCLs and skin fibroblasts, we determined the mtDNA/nDNA ratio (tRNAleu/B2M) by RT-PCR. Since we could not detect a significant difference in the number of mitochondria in ADNP cell lines compared to controls (Additional file : Data S10), we next investigated mitochondrial respiration using the Seahorse analyzer. The observed mitochondrial dysfunction was further addressed in patient-derived (blue) and control (red) fibroblasts using the Cell Mito Stress Test, measuring changes in oxygen consumption rate (OCR) before and after administration of specific compounds that sequentially affect the different complexes of the mitochondrial respiratory chain. The changes in OCR allowed quantification of several aspects of mitochondrial respiration.
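The respiration parameters derived from such a trace follow the standard Cell Mito Stress Test arithmetic: differences between OCR segments around the oligomycin, FCCP, and rotenone/antimycin A injections. A sketch with invented OCR values:

```python
# Sketch of the standard Cell Mito Stress Test arithmetic used to derive
# respiration parameters from an OCR trace. OCR values are invented;
# segment boundaries follow the usual injection order
# (oligomycin -> FCCP -> rotenone/antimycin A).

def mito_stress_params(baseline, post_oligo, post_fccp, post_rot_aa):
    non_mito = min(post_rot_aa)            # non-mitochondrial respiration
    basal = baseline[-1] - non_mito        # last pre-oligomycin reading
    proton_leak = min(post_oligo) - non_mito
    maximal = max(post_fccp) - non_mito
    return {
        "basal": basal,
        "proton_leak": proton_leak,
        "atp_linked": basal - proton_leak,
        "maximal": maximal,
        "spare": maximal - basal,
        "non_mitochondrial": non_mito,
    }

params = mito_stress_params(
    baseline=[120, 118, 115],      # pmol O2/min before oligomycin
    post_oligo=[60, 55, 54],
    post_fccp=[200, 210, 205],
    post_rot_aa=[22, 20, 21],
)
print(params["basal"], params["atp_linked"], params["spare"])  # 95 61 95
```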
Measurements of the basal respiration showed a significant decrease ( p = 0.04; *) in the patient fibroblasts compared to the control lines, confirming the reduced activity measured by the fluorescent mitochondrial staining. In addition, we observed decreased values of proton leak ( p = 0.21; ns), ATP-linked respiration ( p = 0.06; ns) and maximal respiratory capacity ( p = 0.28; ns) in the ADNP cell lines, although the differences were not significant. Lastly, we observed no difference in the spare respiratory capacity ( p = 0.84; ns) and non-mitochondrial respiration ( p = 0.79; ns) between patient and control lines (Fig. E, F).

In this study, we investigated the cerebellum of a unique six-year-old male child heterozygous for the ADNP de novo mutation c.1676dupA/p.His559Glnfs*3 who died of multiple organ failure after a second liver transplant, to unravel the functional biochemical consequences of the mutation in the human brain. ADNP gene mutations typically result in a syndromic form of autism, co-morbid with ID, termed the Helsmoortel–Van der Aa syndrome . The unique autopsy case analyzed in this report presented with moderate ID, whereas developmental milestones such as sitting up and walking independently, language, and bladder training were delayed, as reported for almost all children with the Helsmoortel–Van der Aa syndrome. With 83.3% of all patients presenting with gastrointestinal problems, our autopsy case presented with parent-reported feeding and eating problems. He also presented with severe autism, typified by avoidant behaviors towards other children, as reported in almost 93% of patients. To a lesser extent, the child showed generalized symptomatic epilepsy, which was only reported at low frequency in the patient population (16.2%). Phenotypically, he presented with key features of the syndrome, indicated by a prominent forehead, wide nasal bridge, large mouth, widely spaced teeth, and malformed ears, as observed in the majority of patients.
Symptoms also correlated with urogenital problems (28%), sleep problems (65.2%), and early teething (71.1%) . Clinically, our autopsy patient is thus a bona fide representative of the Helsmoortel–Van der Aa syndrome patient group. While we have been unable to detect (mutant) ADNP by Western blotting in the autopsy, we do show here a human cerebellar DNA methylation pattern consistent with methylation patterns found in the blood of patients, and a transcriptomic profile that bears a significant overlap with the transcriptomic profile of patient cell lines. Proteome analysis of the post-mortem cerebellum also pointed to pathways that have been implicated in the Helsmoortel–Van der Aa syndrome and led to the identification of ADNP regulatory functions of mitochondria in the human brain. We thus conclude that the ADNP post-mortem cerebellum is valid material to study the disorder, which further enabled us to confirm earlier observations made in cellular systems and to unravel novel pathways in disease-relevant brain tissue.

The ADNP brain-specific episignature

ADNP-specific defects in chromatin remodeling translate into genome-wide changes in DNA methylation, including differential methylation of various genes involved in cytoskeletal functions, synaptic transmission, nervous system development , calcium binding , and WNT signaling , in part resembling a Class I type episignature of the Helsmoortel–Van der Aa syndrome . Brain-specific MAGI2 and CTNND2 hypomethylation, similar to the hypomethylation detected in peripheral blood of ADNP patients, was confirmed by targeted bisulfite pyrosequencing and illustrates partially conserved ADNP-specific episignatures across brain and blood cell types. Interestingly, CTNND2 dysregulation has been reported in several cases of autistic and intellectually disabled patients presenting with behavioral problems and dysmorphic features , involving changes in autophagy signaling and arborization of the developing dendrites .
Interestingly, our transcription factor (TF) motif enrichment analysis further identified ADNP as the main transcription factor controlling the hypomethylated gene cluster, confirming its central regulatory function during brain development.

The ADNP mutation affects lineage specification

ADNP regulates genes participating in embryonic development such as Pax6, Olig2, Sox1, Nestin, and Otx2. Our TF analysis showed enrichment of these lineage-specifying genes in the autopsy brain. Pax6 is important for neuronal development, especially of the eye, which could potentially be related to the visual problems observed in our patient cohort (73.6%). Olig2 is involved in the cell fate of ventral neuroectodermal progenitors, and its differential expression impaired interneuron differentiation, causing cognitive impairments. Sox1 and its isoforms regulate embryonic development, male sex determination, and cell fate decisions by acting as an inhibitor of neural differentiation. Nestin encodes a cytoskeletal protein that is expressed in nerve cells, with disturbances causing deficits in motor coordination, a hallmark of the Helsmoortel–Van der Aa syndrome. The transcription factor Otx2 is involved in the differentiation and proliferation of neuronal progenitor cells, thereby affecting brain development, craniofacial and sensory organs, and synaptic plasticity, all of which have been reported to be dysregulated in Adnp heterozygous knockout and CRISPR/Cas9 mouse models. Interestingly, OTX2 hypermethylation was also confirmed by bisulfite pyrosequencing in the ADNP autopsy brain, correlating with abnormal synaptic plasticity, which had previously been demonstrated by immunohistochemical stainings of PSD95 and NMDAR1 in the hippocampal hilus and dentate gyrus of this patient. Taken together, a mutation in the ADNP gene affects brain methylation and the expression of genes involved in brain development, neuronal plasticity, and lineage specification.
The ADNP mutation affects WNT signaling

Consistent with the ADNP-specific episignature pathway enrichment outcome, brain transcriptome changes affected similar pathways, involving downregulation of the WNT signaling pathway, glutamatergic synaptic transmission, cardiac muscle functioning, and nervous system development. More particularly, we observed a decrease in β-catenin levels, the major transcriptional driver of the WNT signaling pathway, resulting in the downregulation of neuroectoderm developmental genes and defective neurogenesis. We also found molecular indications for the downregulation of WNT family member 10A (WNT10A), a gene essential for tooth morphogenesis. Interestingly, 71% of children present with premature primary tooth eruption in the Helsmoortel–Van der Aa syndrome. These results were confirmed using bulk mRNA sequencing of LCLs obtained from several ADNP children, showing decreased WNT signaling, together with pathways such as Notch and Hedgehog signaling affecting embryogenesis and morphogenesis. Another downregulated gene is the RNA-methylating enzyme METTL3, previously reported to be regulated by the β-catenin/WNT signaling pathway in an autism mouse model, as well as by the FMR1 gene, causative of the autistic Fragile X syndrome.

ADNP plays a suggested role in autophagy and aging

We also observed abnormalities in the autophagy pathway, affecting brain homeostasis, via downregulation of the autophagy inducer BECN1 in both patient brain tissue and brain samples of Adnp-deficient mice, as well as in post-mortem schizophrenia brains. Furthermore, we also demonstrated elevated LC3 levels in the autopsy brain and in LCLs after Bafilomycin A treatment. The reduction in BECN1 levels together with the spontaneous increase in LC3 protein levels might reflect a compensatory mechanism.
In addition, ADNP was also shown to bind LC3 directly in a human neuroblastoma cell line, and this association was increased in the presence of the NAP (Davunetide) octapeptide. Furthermore, we observed expression changes in members of the nonsense-mediated decay (NMD) pathway, which triggers the autophagy process. In fact, changes in mRNA levels of the NMD members SMG5 and UPF3B have been associated with syndromic and non-syndromic intellectual disability, autism, childhood-onset schizophrenia, and ADHD. Interestingly, the Slc12a2 and Slc9a3 family members were shown to be regulated in an age-dependent manner in the hippocampus, cortex, and spleen of Adnp haploinsufficient mice, and early-onset hippocampal tauopathy, a marker for aging and neurodegeneration, was reported in this young subject by an independent study. RUBCN is a negative regulator of autophagy, which also correlates with aging.

HP1beta involvement

Intriguingly, we also detected a significant increase in protein levels of the repressive chromatin ADNP interactor HP1β, exclusively in our proteomics experiment, which is consistent with as yet unpublished observations in the brain of our novel Adnp frameshift mouse model, again highlighting the resemblance between murine and human data and the effects that mutations in ADNP have on the expression of its direct interaction partners.

Novel ADNP interactions in the brain: immune signature, the cytoskeleton and mitochondrial involvement

In our study, pathways involving cytoskeleton dynamics, T-helper cell differentiation, and immune-associated pathways were found to be transcriptionally upregulated. However, it cannot be excluded that the observed immune system malfunctions could be related to the reported multiple organ failure in the patient. Moreover, a substantial proportion of Helsmoortel–Van der Aa patients present with (pharmacologically treated) thyroid hormone problems or epilepsy, which may also affect inflammation-related processes.
Upon further downstream protein expression profiling of the ADNP patient versus the healthy cerebellum proteome by mass spectrometry analysis, differentially expressed proteins showed enrichment for mitochondrial dysfunction. Mitochondrial dysfunction has been linked to the onset of autism and epilepsy by multiple studies, both key features present in the clinical presentation of the current case. In addition, we also observed significant downregulation of cytoskeletal functions and sirtuin signaling. Of special note, ADNP associates with cytoskeletal microtubules through its SxIP motif and the microtubule end-binding proteins 1 and 3 (EB1/EB3) in differentiated neurons, with human ADNP mutants (e.g., p.Ser404*, p.Tyr719*, and p.Arg730*) showing microtubule deficits in cell cultures. Interestingly, sirtuins (SIRTs) are vital NAD+-dependent deacetylase enzymes that regulate autophagy, mitophagy, aging, and cytoskeletal (microtubule) functions via dynamic changes in protein acetylation. Similar to ADNP, SIRT1 regulates autophagy and mitochondrial functions, maintains genomic stability, enhances synaptic plasticity, suppresses inflammation, regulates cellular aging, and promotes neuroprotection. Only recently, ADNP-SIRT1 interactions could be detected via WDR5 in a human neuroblastoma cell line, which could now be further validated in the murine cerebellum modeling the deceased ADNP patient. In addition, we showed biochemical evidence for indirect binding of ADNP and SIRT1 via the microtubule end-binding proteins (EB1/EB3), further linking autophagy, mitophagy, and cytoskeletal (microtubule) brain functions. Similarly, in other neurodevelopmental disorders such as Koolen-De Vries syndrome, dysfunctional autophagy was found to cause synaptic deficits in human iPSC cultures.
Our findings were further supported by gene set enrichment analysis of cellular stress responses and by mitochondrial activity-based assays, which revealed disrupted mitochondrial gene expression via autophagy (mitophagy) processes and decreased mitochondrial activity in patient-derived fibroblasts and LCLs with an ADNP mutation. Interestingly, the ADNP-derived drug NAP (Davunetide) was shown to improve microtubule-dependent traffic, restore the autophagic flux, and potentiate autophagosome-lysosome fusion, leading to autophagic vacuole clearance in Parkinson's disease cells. Finally, protein-protein interaction network analysis of our differential proteome predicted SIRT1 as a central hub that links chromatin remodelers (e.g., ADNP, SMARCC2, HDAC2, and YY1) with autophagy signaling (BECN1, LAMP1, and LC3). Various chromatin remodeling proteins of the network have previously been described as part of the ADNP-WDR5-SIRT1-BRG1-HDAC2-YY1 chromatin complex, which in part shares co-expression via transcriptional regulation. In conclusion, our integrative multi-omics study has for the first time confirmed, at the epigenome-transcriptome-proteome level in primary brain tissue of an ADNP child, various neurodevelopmental pathways affected by mutant ADNP that had previously been described in in vitro LCL cultures and in vivo animal experiments, substantiating strong cross-species and cross-cell-type conservation of molecular ADNP (disease) features. Moreover, our results hint towards a new functional link between the chromatin remodeler ADNP and the NAD+-deacetylase SIRT1 in controlling cytoskeletal and mitochondrial autophagy stress responses in neurodevelopment and plasticity. This novel molecular mechanism holds promise for new therapeutic strategies aimed at restoring mitochondrial (dys)function(s) in the Helsmoortel–Van der Aa syndrome.
In this study, we performed a molecular and biochemical autopsy study on the cerebellar tissue of a patient with Helsmoortel–Van der Aa syndrome, who died of multiple organ failure following a second liver transplantation. We acknowledge several limitations that are either intrinsic to the study design or linked to the experimental course of action. First, we report the only post-mortem brain tissue that is currently available and compare it with an age-matched control subject. We acknowledge a gender difference between the ADNP patient (male) and the control subject (female) but have consistently taken this difference into account during all bioinformatic analyses. In addition, we acknowledge that our control subject had a clinical diagnosis of Rett syndrome. However, we sequenced the entire ADNP gene, showing no genetic defect in the specimen. In addition, an expert pathologist examined brain sections using different immunohistochemical techniques, revealing no morphological abnormalities in the cerebellum. Although we are aware of the unique value of this material, we do acknowledge the lack of statistical power due to the limited sample size (n = 1).
To overcome the lack of statistical power, we compared our findings in the human autopsy brain to several primary cell lines (e.g., LCLs and skin fibroblasts) derived from patients with different ADNP mutations and looked for gene expression similarities and dissimilarities across these tissues. Here, we observed a similar trend of gene expression changes in the human brain as in the in vitro and in vivo model systems, indicating strong conservation of ADNP gene function. However, this mutation is unique, and no other patient has been diagnosed with this ADNP mutation before. Therefore, comparisons to individuals with different ADNP mutations or to genetically engineered mouse models cannot completely represent the molecular mechanisms underlying this specific mutation. On the topic of preservation of the autopsy material, we recognize a long post-mortem interval, i.e., the period between the death of the patient and the brain biopsy, which was 35 h, after which liquid nitrogen was applied. To examine DNA and mRNA preservation in the cerebellum of our autopsy case, we subjected the extracted materials to the most sensitive bioanalyzer technologies to control for proper integrity. In addition, we also implemented very extensive validations, e.g., pyrosequencing and RT-PCR, for the implemented omics techniques using adequate controls. Similar to mRNA integrity, the preservation of proteins is not homogeneous, since some molecules are more vulnerable than others. In particular, we were unable to detect ADNP by means of western blotting in the patient cerebellum, but we did detect the protein in the cerebellum of the control subject. Since ADNP is already an unstable protein, the absence of the protein could potentially be attributed to degradation by active proteases in the brain during the post-mortem interval.
Lastly, we noted prominent changes in immune system-related pathways, as indicated by our methylation, transcriptome, and proteome analyses of the autopsy brain. Although a subset of patients presented with thyroid problems and recent investigations indicated a role of ADNP in T-helper cell differentiation, we interpret these results with caution, as we cannot rule out a certain bias in gene expression alterations following two liver transplantations and the administration of multiple antiepileptic drugs during the lifespan of this child with an ADNP mutation.

Additional file 1: Table S1. Specifications of the human subjects, lymphoblastoid and fibroblastic cell lines. The table represents the anonymized patient IDs as a fictive number together with the WES-validated mutation in the ADNP gene. RNA purity determined by the 260/280 ratio and the RIN integrity score of the RNA samples are also reported.

Table S2. Tested antibody overview. The table indicates the antibodies used for this study together with the manufacturer and catalog number, host species, predicted reactivity, peptide sequence, and the optimized dilution for each western blot experiment.

Table S3. Pyrosequencing primers. The table contains the gene name, forward primer (5'→3'), reverse primer (5'→3'), location of the biotin tag on either the forward (Fwd) or reverse (Rev) primer, sequencing primer (5'→3'), nucleotide sequence for CpG analysis, and prediction of the EPIC BeadChip array result.

Table S4. RT-PCR primer sequences. The table represents the forward and reverse primer sequences (5'→3') for expression analysis of ADNP, brain and lymphoblastoid transcriptome, and mitochondrial gene panel confirmations.

Table S5. Mitophagy-related gene panel used for screening the RNA sequencing data of ADNP lymphoblastoid cell lines (LCLs).

Table S6.
Phenotype and clinical features of the post-mortem ADNP patient carrying the c.1676dupA/p.His559Glnfs*3 mutation in comparison with a cohort study by Van Dijck et al. entailing 78 HVDAS individuals.

Additional file 2: Human Infinium EPIC BeadChip array data of the ADNP cerebellum as compared to an age-matched control subject. CpG probes with an absolute β-value difference > 0.1 (10% methylation difference) are represented.

Additional file 3: Transcription factor motif analysis with iRegulon. Enriched transcription factor motifs in the ADNP cerebellum using the identified hypo- and hypermethylated genes. Motif enrichment was scored using the normalized enrichment score (NES).

Additional file 4: List of differentially expressed genes in the ADNP cerebellum as compared to an age-matched control subject using the NOIseq non-parametric analysis package.

Additional file 5: List of differentially expressed genes in the ADNP patient-derived lymphoblastoid cell lines as compared to age- and sex-matched control lymphoblastoid cell lines using the DESeq2 analysis package.

Additional file 6: List of differentially expressed genes in the ADNP cerebellum that overlap with the differentially expressed genes in the ADNP patient-derived lymphoblastoid cell lines.

Additional file 7: Correlation heatmap of the label-free quantification mass spectrometry (LFQ-MS) experiment in the ADNP cerebellum. Pairwise correlations of protein abundances characterizing the relationships between proteins of the global mass spectrometry experiment. Correlations between all proteins, clustering of protein groups, and similarly behaving proteins are represented in the protein-protein correlation matrix calculated on the logarithmic intensities. A strong correlation between technical replicates (n = 5) was observed (red color), indicating high reproducibility as tested by the Pearson correlation. Negative correlations are represented in blue (see scaled bar legend).
Additional file 8: List of differentially expressed proteins in the ADNP cerebellum (LFQ-MS).

Additional file 9: Mitophagy transcriptomic gene signature in the ADNP cerebellum. Expression levels of mitophagy-related genes in the ADNP patient cerebellum compared to an age-matched control subject. mRNA sequencing demonstrated both an upregulated (red) and a downregulated (blue) mitophagy gene signature in the ADNP patient cerebellum as compared to an age-matched control subject.

Additional file 10: Autophagic flux assessment in ADNP patient-derived lymphoblastoid cell lines using Bafilomycin A1. A. The autophagic flux was determined in ADNP patient-derived and age- and sex-matched control lymphoblastoid cell lines by treatment with 160 nM Bafilomycin A1 (BAF) for two hours. Protein extracts of untreated and BAF-treated cells were subjected to western blotting using anti-p62/SQSTM1 and anti-LC3 antibodies to assess the autophagic flux. Although p62 expression increased after BAF treatment (+BAF) compared to untreated (-BAF) cells, the difference was not significant in patients (PAT) or controls (CTR). However, LC3 expression significantly increased after BAF treatment and was significantly increased in PAT versus CTR post-treatment, compared to untreated patient cells (PAT-BAF). All western blots were controlled by GAPDH to ensure equal loading. Image quantification was performed using ImageJ software. B. Graphical representation was performed in GraphPad Prism version 9.3.1 using a two-way ANOVA with Sidak's multiple comparisons test to assess the interaction of genotype (PAT versus CTR) and treatment (-BAF versus +BAF).

Additional file 11: Determination of the mitochondrial DNA copy number (mtDNA-CN) in ADNP patient-derived lymphoblastoid and fibroblastic cell lines.

Additional file 12: Raw western blotting images.
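The probe-selection rule described for Additional file 2 (keep CpG probes with an absolute β-value difference above 0.1, i.e., a 10% methylation difference) amounts to a simple filter over per-probe β-values. A minimal sketch, with hypothetical probe IDs and values rather than the actual array data:

```python
def differential_probes(patient_beta, control_beta, threshold=0.1):
    """Return CpG probes whose absolute beta-value difference exceeds the
    threshold (0.1 = 10% methylation difference). Positive delta means the
    probe is hypermethylated in the patient; negative means hypomethylated."""
    result = {}
    for probe, b_pat in patient_beta.items():
        if probe in control_beta:
            delta = b_pat - control_beta[probe]
            if abs(delta) > threshold:
                result[probe] = round(delta, 2)
    return result

# Hypothetical beta values (fraction methylated, 0..1) for three probes:
patient = {"cg000101": 0.82, "cg000202": 0.40, "cg000303": 0.15}
control = {"cg000101": 0.60, "cg000202": 0.38, "cg000303": 0.35}
print(differential_probes(patient, control))
# {'cg000101': 0.22, 'cg000303': -0.2}
```

With n = 1 per group there is no per-probe statistic, so an effect-size cut-off of this kind is the natural selection criterion.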
In-vitro comparison of fracture resistance of CAD/CAM porcelain restorations for endodontically treated molars | d434d22a-9931-42f0-9975-bc91615d4bda | 11456252 | Dentistry[mh] | Root canal treatment is a critical dental procedure that has seen substantial improvement with advances in canal shaping, filling techniques, and materials. Following successful endodontic therapy, the preparation and restoration of the access cavity are crucial operations that significantly influence the overall success of the treatment. Properly restoring the access cavity ensures that the tooth remains functional and structurally sound, which is essential for its longevity and effectiveness . The primary clinical concern post-restoration is the risk of tooth fracture, which often results from the failure of coronal restorations . The loss of endodontically treated teeth is frequently linked to these failures, highlighting the importance of access cavity restoration and coronal restorations that protect the root canal system, support dental tissues, and restore oral functionality . Various materials can be used to restore endodontically treated teeth, each with its own set of advantages and limitations. Common materials include composite resins, ceramics, metals, and amalgams. Composite resins are favored for their aesthetic properties and ease of manipulation, but they may lack the necessary strength for heavily loaded areas . Ceramics, on the other hand, offer superior aesthetic and mechanical properties but require more extensive preparation and can be more technique-sensitive . Metals, including gold and base metal alloys, provide excellent durability and strength but may not be aesthetically pleasing. Amalgam, though durable and less costly, has seen a decline in use due to aesthetic concerns and the advent of more advanced materials . 
Post-core restorations are essential for teeth with considerable material loss following endodontic treatment, providing support and retention for the tooth's function, phonation, and aesthetics. The surface characteristics and hardness of the materials are crucial to the mechanical behavior of the restored teeth. Nevertheless, using posts carries risks such as root perforation and reduction of root dentin, leading some researchers to recommend against their use. Other restorative options, like amalgam or coronal-radicular restorations, have also yielded successful outcomes in clinical and laboratory settings. The less-documented endocrown technique, conceived as a monoblock structure combining the core and the full-contour crown, was first described by Pissis in 1995 and further developed and popularized by Bindl and Mörmann in 1999. Endocrowns rely on the pulp chamber walls for macromechanical retention and on adhesive bonding for micromechanical retention. They are particularly suitable for teeth with substantial material loss, where conventional restorations are not viable due to limited space or insufficient ceramic thickness. The shift toward minimally invasive procedures and advancements in adhesive dentistry have prompted the development of new restoration methods. Consequently, there is a growing preference for ceramic inlays, onlays, and endocrowns over traditional post-core and full-contour crown restorations, especially for molars with extensive crown damage following endodontic treatment. These restoration methods emphasize the preservation of tooth structure and enhance the longevity of the treated teeth. Additionally, the integration of CAD/CAM technology in dental practices has revolutionized chairside design and manufacturing. CAD/CAM systems enable precise and efficient fabrication of indirect restorations, such as ceramic inlays, onlays, and crowns, making them an essential tool for dental professionals.
This technology improves the accuracy and fit of restorations and streamlines the workflow in dental practices, reducing the time and effort required for restorative procedures . This study evaluates the fracture strength and fracture patterns of three restorative constructs: post-core and full-contour crowns, composite resin core and full-contour crowns, and endocrowns. These restorations were fabricated from feldspathic porcelain using CAD/CAM technology for lower first molars that had undergone extensive crown destruction following root canal therapy. It is hypothesized that restorations made using CAD/CAM technology with feldspathic porcelain will exhibit higher fracture resistance and distinct fracture patterns compared to traditional methods, with significant differences observed among the different restorative constructs. The null hypothesis of this study is that there is no significant difference in fracture resistance among the different types of CAD/CAM porcelain restorations used for endodontically treated molars.
The university’s Clinical Research Ethics Committee approved the study protocol ( B.30.2.ODM.0.20.08/790 ). The manuscript of this laboratory study has been written according to the Preferred Reporting Items for Laboratory Studies in Endodontology (PRILE) 2021 guidelines. Preparation of specimens This study utilized eighty permanent lower first and second molars extracted due to periodontal disease, ensuring they were free of caries, fractures, and previous restorations and had separated roots. Initially, these teeth were rinsed under tap water and cleaned using an ultrasonic scaler to remove hard and soft tissue debris. Teeth were selected based on their similar morphology, confirmed by measuring crown-root lengths, mesial-distal, and bucco-lingual widths using a digital caliper (Digital Caliper, CEN-TECH, Virginia, USA). Afterward, the teeth were stored in a thymol solution for the first 24 h and in distilled water at room temperature until experimentation. Inspection and distribution of specimens The selected teeth were inspected for residual tissue or defects using a stereomicroscope (Leica EZ4 D, Leica Microsystems, Wetzlar, Germany) at 20x magnification. This ensured that all specimens met the inclusion criteria and were suitable for further procedures. To ensure a balanced distribution of the specimens into the test groups, the teeth were categorized based on their crown-root lengths, mesial-distal (MD) diameters, and bucco-lingual (BL) diameters. Teeth with similar dimensions were grouped to minimize variability. The distribution process involved the following steps: Measurement and Categorization: Each tooth’s crown-root length, MD diameter, and BL diameter were measured using a digital caliper. Teeth were then categorized into subgroups based on these measurements to ensure uniformity within each test group.
Randomized Assignment: Within each subgroup, teeth were randomly assigned to one of the four test groups (Post-Core, Core-full-contour crown, Endocrown, and Control) to ensure an even distribution of specimen characteristics across all groups. Root canal treatment and cavity preparation The crowns of the teeth were first removed by cutting 1 mm above the enamel-cementum junction, parallel to the occlusal surface, using a low-speed linear precision saw (Isomet 5000 Linear Precision Saw; Buehler, Illinois, USA) with water cooling. Following this, endodontic access cavities were created using a diamond fissure bur under water cooling, and the pulp tissue was extracted. Working lengths were determined using a #15 K-type root canal file (Dentsply Maillefer, Ballaigues, Switzerland) advanced with a clockwise motion to the apical part of the root. Root canal shaping utilized the crown-down technique with rotary nickel-titanium files (ProTaper Next; Dentsply Maillefer, Ballaigues, Switzerland), finishing with the X2 (#25.06) file for mesial root canals and the X3 (#30.07) file for distal root canals, as per manufacturer guidelines. Root canals were then obturated using AH Plus sealer (Dentsply; De Trey Konstanz, Germany) and the lateral compaction technique with main and auxiliary gutta-percha cones (Dentsply Maillefer; Ballaigues, Switzerland and Diadent #25.02; Diadent Group International; Chongchong Buk Do, South Korea) corresponding to the final file used. Excess coronal gutta-percha was removed using a hot hand instrument. Blunt-tipped tapered diamond burs (Piranha Diamond, SS White, NJ, USA) were used for cavity preparation, aligning the axial walls with an 8°-10° taper angle towards the occlusal plane. The coronal outer wall thickness was maintained at a minimum of 2 mm. Standardization was ensured by measuring all access cavity wall thicknesses with a periodontal probe and digital caliper.
The access cavity walls were then smoothed to remove sharp corners and edges (Fig. ). After preparation, cavities were sealed with temporary restorative material (Cavit G; 3 M Espe, Seefeld, Germany) and stored in distilled water at room temperature until further processing. Grouping of specimens The specimens were assorted into four distinct groups based on the types of restoration, along with a control group consisting of 20 teeth. To enhance the clarity of the restorative applications and geometric preparation differences between the test groups, illustrative presentations were included (Figs. , and ). Post-core and full-contour crown group: post slot preparation and post placement A post drill of 1.2 mm diameter (Cytec Blanco; Hahnenkratt, Germany) created post preparations in the distal root canals, ensuring a minimum of 5 mm of gutta-percha remained at the apical end. Excess gutta-percha was removed, and the post space was cleansed with 2 mL of a 17% EDTA solution, then with alcohol, and dried using paper cones (Diadent Group International, South Korea). The EverStick Post of 1.2 mm diameter (Stick Tech Ltd, Finland) was extracted from its packaging, checked for fit, and trimmed to leave 4 mm outside the canal. Single Bond Universal (3 M ESPE, St. Paul, MN, USA) was applied to the post and the canal walls, followed by air-drying and light-curing (Elipar S10 LED Curing Light, 3 M ESPE, St. Paul, MN, USA) for 10 s. MaxCem Elite (Kerr, USA), a dual-cure resin cement, was injected into the canal using a narrow tip to ensure precise application and minimize voids. The post was positioned in the canal, excess cement was removed, and light polymerization was performed for 40 s. After cementing the post in the root canal, the bonding agent (Single Bond, 3 M ESPE, USA) was applied to the pulpal walls and post surface, left for 20 s, air-dried, and then light-cured for 10 s.
Flowable resin composite (3 M Filtek Ultimate; 3 M ESPE, St. Paul, USA) was used to seal the entrances of the mesial root canals, which did not receive posts, and the height of the pulp chamber was maintained at 2 mm. Core restoration utilized universal posterior composite (3 M Filtek Z250; 3 M ESPE, USA), condensed incrementally and light-polymerized for 20 s per layer. Core structures were shaped using blunt-tipped diamond burs with an 8°-10° taper angle under water cooling. The final design ensured at least 1 mm shoulder steps in the access cavity and maintained a 4 mm occlusal height from the margin (Fig. ). Core-full-contour crown group: access cavity preparation and core restoration The temporary filling was removed, and the pulp chamber was cleansed with alcohol. The chamber’s height was gauged with a periodontal probe, and any chambers exceeding 2 mm were sealed with flowable composite resin (3 M Filtek Ultimate; 3 M ESPE, St. Paul, USA). The base of the chamber was refined using a diamond fissure bur to a consistent depth of 2 mm. All tooth surfaces were etched with 37% phosphoric acid for 30 s, then rinsed and air-dried. A bonding agent was applied to the pulp chamber walls and access cavity step surfaces for 20 s, air-dried, and then light-cured for 10 s. The core was restored in layers using universal posterior composite (3 M Filtek Z250; 3 M ESPE, St. Paul, USA), with each layer not exceeding 1 mm in thickness and light-cured for 20 s. Blunt-tipped diamond burs with an 8°-10° taper were used for shaping under water cooling, ensuring a minimum of 1 mm for the access cavity steps and a 4 mm core height from the access cavity margin (Fig. ). Endocrown group: pulp chamber modification and endocrown access cavity preparation After removing the temporary filling, the pulp chamber was cleansed with alcohol. The chamber’s height was then assessed with a periodontal probe.
In cases where the pulp chamber exceeded 2 mm in height, the canal orifices were sealed with flowable composite resin (3 M Filtek Ultimate; 3 M ESPE, St. Paul, USA). A diamond fissure bur was used to level the floor of the pulp chamber to a depth of 2 mm. The final shaping of the pulp chamber walls was done using blunt-tipped diamond burs with a taper angle between 8° and 10°, employing a coarse-to-fine grain strategy. This ensured that the floor of the pulp chamber was smooth and the walls inclined occlusally at an angle of 8°-10°, maintaining a minimum coronal wall thickness of 2 mm (Fig. ).
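The 8°-10° wall convergence targeted in all of these preparations follows from simple linear measurements. Purely as a hypothetical illustration (in the study itself the taper was verified in the CEREC CAD software, not with a script), the taper of one axial wall can be computed from the occlusal and cervical cavity widths and the wall height:

```python
import math

def wall_taper_degrees(occlusal_width_mm, cervical_width_mm, wall_height_mm):
    """Per-wall taper (degrees) of a cavity that widens toward the occlusal.

    The convergence of one axial wall relative to the long axis is the
    arctangent of half the width difference over the wall height.
    """
    half_spread = (occlusal_width_mm - cervical_width_mm) / 2.0
    return math.degrees(math.atan(half_spread / wall_height_mm))

# Hypothetical measurements: a 4 mm high cavity wall whose occlusal
# opening is 1.2 mm wider than its floor.
angle = wall_taper_degrees(7.2, 6.0, 4.0)
print(f"{angle:.1f}")          # 8.5
print(8.0 <= angle <= 10.0)    # True -> within the 8°-10° target range
```

All widths here are illustrative; the point is only that a 1.2 mm occlusal spread over a 4 mm wall lands inside the target range.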
Control group The control group consisted of undamaged human teeth without any decay or prior treatment. Preparation of restorations with CAD/CAM System Scans for all specimens were taken using the Cerec Omnicam (Sirona Dental Systems, Bensheim, Germany). Each tooth was initially encased in silicone impression material (Optosil, Heraeus Kulzer, Germany), extending 2 mm beneath the enamel-cementum junction to preserve the tooth’s contour. The restorative designs were crafted using CEREC Software 4.4.4 on a CEREC AC computer monitor (Sirona Dental Systems, Bensheim, Germany). The taper of the prepared cores was confirmed using the CAD software, which allowed precise measurement and verification of the taper angle to ensure it was within the 8°-10° range. All restorations were milled with the CEREC MC XL milling unit (Sirona Dental Systems, Bensheim, Germany) using feldspathic porcelain blocks (VITABLOCS Mark II, Vita Zahnfabrik, Bad Säckingen, Germany). The distance from the enamel-cementum junction to the highest cusp tip was standardized to reflect the typical crown length of lower permanent first molars and the minimum porcelain thickness for occlusal crown areas: it was set to 5.5 mm from the cervical band to the central fossa and 6.5 mm to the highest cusp tip. Measurements were verified in the software, ensuring they were parallel to the tooth’s long axis. The milling of each restoration block took approximately 12 min to complete, and the procedure was uniformly applied to 60 specimens. Bonding of restorations The adhesive used for bonding was Single Bond Universal (3 M ESPE, St. Paul, MN, USA), a single-component, light-cured adhesive. Its composition includes 10-methacryloyloxydecyl dihydrogen phosphate (MDP), bisphenol A glycidyl methacrylate (Bis-GMA), hydroxyethyl methacrylate (HEMA), dimethacrylate resins, ethanol, water, a photoinitiator, and silane.
The adhesive treatment for restorative surfaces involved applying Single Bond Universal to the intaglio surface of the restorations and the prepared tooth surfaces, followed by air-drying and light-curing for 10 s. The bonding of the restorations was performed using a dual-cure resin cement (RelyX Ultimate, 3 M ESPE, St. Paul, MN, USA). Enamel surfaces of all teeth were etched with 37% phosphoric acid for 30 s, thoroughly rinsed with water for 20 s, and air-dried. A bonding agent was applied uniformly with a brush for 20 s, gently air-blown to remove any surplus, and light-cured for 10 s. Restorative surfaces received a roughening treatment with 9.5% hydrofluoric acid (Porcelain Etchant; Bisco, Illinois) for 40 s, were rinsed for the same duration, and then air-dried. Silane (Ultradent Products, USA) was applied to these surfaces and air-dried for 60 s. Maxcem Elite (Kerr, USA), a dual-cure resin cement, was dispensed using a special syringe onto the tooth’s bonding surface and the restoration’s attachment area. Restorations were carefully seated into the access cavity, and initial polymerization was done by applying finger pressure from the occlusal surface. After 3 s, polymerization was paused to facilitate excess cement removal. Any remaining excess was scraped away with a dental probe. The bonding was finalized with a 40-second light application to all restoration surfaces. Creating the periodontal ligament space To simulate the periodontal ligament space, the roots of the teeth were first coated with molten baseplate wax. After the wax hardened, the teeth were positioned into silicone impression material (Optosil, Heraeus Kulzer, Germany). Impressions of the tooth roots were made, and then the teeth were removed. The hardened wax on the roots was melted off in hot water. The cleaned root surfaces were treated with a layer of impression tray adhesive (3 M ESPE; Seefeld, Germany) to facilitate the bonding of the polyether impression material.
Once dry, Vaseline was applied to the silicone’s root surface impressions. The polyether impression material (Soft Monophase, 3 M ESPE, Ankara, Turkey) was then syringed into the impressions. Teeth were reinserted into their respective molds with gentle finger pressure until the impression material was set. After curing, any excess was trimmed away with a No. 15 scalpel, and the teeth were removed. Following the periodontal ligament space simulation, the teeth were embedded in quick-setting cold acrylic resin (Imicryl; Konya, Turkey). The embedding process utilized polyvinyl chloride (PVC) pipes measuring 2.5 cm in diameter and 3.5 cm in length as molds. Each tooth was embedded to 1 mm below the enamel-cementum junction, with the resin surface perpendicular to its long axis. Aging process The aging process involved placing the specimens in a two-axis masticatory simulator (MOD Dental; Esetron, Ankara, Turkey), which is computer-controlled and consists of six experimental chambers. This system is designed with two motors to control horizontal and vertical movements and has features to electronically regulate temperature and water level for simultaneous thermal cycling and movement actions. The specimens underwent thermal cycling between 5 °C and 55 °C for 60 s each, with transfer intervals of 12 s, totaling 5000 cycles. Stainless steel balls of 5 mm diameter were used to represent the opposing tooth in the aging process. Each specimen was subjected to a chewing force of 50 N, generated by 5 kg disks attached to each chamber. The simulator performed 250,000 vertical movements per specimen at a distance of 2 mm, with a speed of 50 mm/s and a chewing frequency of 2.7 Hz. After completing the cycling in the simulator, the specimens were stored in distilled water at room temperature until the fracture test was conducted. The simulator parameters, based on Krejci et al.’s methods, were selected to replicate one year of masticatory forces, applying a force of 50 N for 250,000 cycles.
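For orientation, the cycle counts above imply the following approximate machine times. This back-of-the-envelope sketch assumes one thermal cycle comprises a 60 s dwell in each bath plus a 12 s transfer in each direction; the exact per-cycle composition is not spelled out in the protocol:

```python
# Hypothetical timing check of the aging protocol.
CHEWING_CYCLES = 250_000        # vertical movements per specimen
CHEWING_FREQ_HZ = 2.7           # chewing frequency
THERMAL_CYCLES = 5_000
# Assumption: 60 s in the 5 °C bath + 60 s in the 55 °C bath + 2 x 12 s transfer.
SECONDS_PER_THERMAL_CYCLE = 60 + 60 + 12 + 12

chewing_hours = CHEWING_CYCLES / CHEWING_FREQ_HZ / 3600
thermal_hours = THERMAL_CYCLES * SECONDS_PER_THERMAL_CYCLE / 3600

print(f"chewing: {chewing_hours:.1f} h")   # 25.7 h of mechanical loading
print(f"thermal: {thermal_hours:.1f} h")   # 200.0 h of thermocycling
```

The two processes ran simultaneously in the simulator, so these figures are upper bounds on each axis rather than a total run time.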
Fracture strength test The fracture strength of the specimens was evaluated using a universal testing machine (Instron; Instron Corp, MA, USA). The samples were clamped into the machine with their axes parallel to the ground. A stainless steel indenter 5 mm in diameter was positioned centrally on the occlusal surface of each restoration. A vertical force was applied perpendicular to the occlusal plane at a 1 mm/min crosshead speed until fracture occurred. The peak force at the fracture point was recorded in Newtons (N). Any samples that withstood the maximum force capacity of the machine, 2000 N, without breaking were classified as ‘No Fracture.’ Analysis of fracture types Post-fracture examination of the specimens involved stereomicroscopic analysis (Leica EZ4 D, Leica Microsystems, Wetzlar, Germany), during which photographs of all tooth surfaces were taken. Fracture types were categorized into four distinct classifications: Type I: Adhesive failure occurred without any fracture to the tooth or restoration. Type II: The fracture was confined to the restoration alone. Type III: Both the tooth and restoration fractured, with the break occurring above the enamel-cementum junction. Type IV: Fracture of both the tooth and restoration occurred below the enamel-cementum junction. Fractures above the enamel-cementum junction were deemed “Restorable Fractures,” indicating the possibility of repair. In contrast, fractures below this junction were labeled “Unrestorable Fractures,” implying that the damage was too extensive for corrective measures. Statistical analysis of data The conformity of the data to a normal distribution was assessed using the Shapiro-Wilk test. As the data were not normally distributed, the Kruskal-Wallis H test was applied to compare fracture strengths between the groups. Dunn’s test was subsequently used for pairwise group comparisons. Data were analyzed using IBM SPSS V23 (Chicago, IL, USA).
The Chi-square test was utilized to identify variations in fracture types among the groups. Numerical data were presented as mean ± standard deviation, and categorical data were expressed as frequency (percentage). A p-value of less than 0.05 was considered statistically significant.
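The analysis itself was run in SPSS; purely as an illustration of the statistic involved, a minimal pure-Python computation of the Kruskal-Wallis H (with tie correction) on hypothetical fracture-strength readings might look like:

```python
from collections import Counter
from itertools import chain

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic with tie correction (pure-Python sketch)."""
    pooled = sorted(chain.from_iterable(groups))
    n = len(pooled)
    # Mid-ranks: tied values share the average of their rank positions.
    positions = {}
    for i, v in enumerate(pooled, start=1):
        positions.setdefault(v, []).append(i)
    rank = {v: sum(p) / len(p) for v, p in positions.items()}

    h = 12.0 / (n * (n + 1)) * sum(
        sum(rank[v] for v in g) ** 2 / len(g) for g in groups
    ) - 3 * (n + 1)

    # Correction for ties in the pooled sample.
    ties = sum(t**3 - t for t in Counter(pooled).values())
    return h / (1 - ties / (n**3 - n))

# Hypothetical fracture-strength readings (N) for two small groups:
h = kruskal_wallis_h([1830, 1795, 1902], [1532, 1488, 1601])
print(f"{h:.3f}")  # 3.857
```

The resulting H is compared against a chi-square distribution with k − 1 degrees of freedom (k = number of groups); in practice one would use `scipy.stats.kruskal`, which returns the p-value directly.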
Scans for all specimens were taken using the Cerec Omnicam (Sirona Dental Systems, Bensheim, Germany). Each tooth was initially encased in silicone impression material (Optosil, Heraeus Kulzer, Germany), extending 2 mm beneath the enamel-cement junction to preserve the tooth’s contour. The restorative designs were crafted using CEREC Software 4.4.4 on a CEREC AC computer monitor (Sirona Dental Systems, Bensheim, Germany). The taper of the prepared cores was confirmed using the CAD software, which allowed precise measurement and verification of the taper angle to ensure it was within the 8⁰-10⁰ range. All restorations were milled with the CEREC MC XL milling unit (Sirona Dental Systems, Bensheim, Germany) using feldspathic porcelain blocks (VITABLOCS Mark II, Vita Zahnfabrik, Bad Säckingen, Germany). The enamel-cementum junction was standardized to the highest cusp tip distance to reflect the typical crown length of the lower permanent first molars and the minimum porcelain thickness for occlusal crown areas. This distance was set to 5.5 mm from the cervical band to the central fossa and 6.5 mm to the highest cusp tip. Measurements were verified in the software, ensuring they were parallel to the tooth’s long axis. The milling of each restoration block took approximately 12 min to complete, and the procedure was uniformly applied to 60 specimens.
The adhesive used for bonding was Single Bond Universal (3 M ESPE, St. Paul, MN, USA), a single-component, light-cured adhesive. Its composition includes 10-methacryloyloxydecyl dihydrogen phosphate (MDP), bisphenol A glycidyl methacrylate (Bis-GMA), hydroxyethyl methacrylate (HEMA), dimethacrylate resins, ethanol, water, a photoinitiator, and silane. The adhesive treatment for restorative surfaces involved applying Single Bond Universal to the intaglio surface of the restorations and the prepared tooth surfaces, followed by air-drying and light-curing for 10 s. The bonding of the restorations was performed using a dual-cure resin cement (RelyX Ultimate, 3 M ESPE, St. Paul, MN, USA). Enamel surfaces were etched with 37% phosphoric acid for 30 s for all teeth, thoroughly rinsed with water for 20 s, and air-dried. A bonding agent was applied uniformly with a brush for 20 s, gently air-blown to remove any surplus, and light-cured for 10 s. Restorative surfaces received a roughening treatment with 9.5% hydrofluoric acid (Porcelain Etchant; Bisco, Illinois) for 40 s, rinsed for the same duration, and then air-dried. Silane (Ultradent Products, USA) was air-dried on these surfaces for 60 s. Maxcem Elite (Kerr, USA), a dual-cure resin cement, was dispensed using a special syringe onto the tooth’s bonding surface and the restoration’s attachment area. Restorations were carefully seated into the access cavity, and initial polymerization was done by applying finger pressure from the occlusal surface. After 3 s, polymerization was paused to facilitate excess cement removal. Any remaining excess was scraped away with a dental probe. The bonding was finalized with a 40-second light application to all restoration surfaces.
To simulate the periodontal ligament space, the roots of the teeth were first coated with molten baseplate wax. After the wax hardened, the teeth were positioned into silicone impression material (Optosil, Heraeus Kulzer, Germany). Impressions of the tooth roots were made, and then the teeth were removed. The hardened wax on the roots was melted off in hot water. The cleaned root surfaces were treated with a layer of impression tray adhesive (3 M ESPE; Seefeld, Germany) to facilitate the bonding of the polyether impression material. Once dry, Vaseline was applied to the silicone’s root surface impressions. The polyether impression material (Soft Monophase, 3 M ESPE, Ankara, Turkey) was then syringed into the impressions. Teeth were reinserted into their respective molds with gentle finger pressure until the impression material was set. After curing, any excess was trimmed away with a No. 15 scalpel, and the teeth were removed. For the periodontal ligament space simulation, teeth were embedded in quick-setting cold acrylic resin (Imicryl; Konya, Turkey). The embedding process utilized polyvinyl chloride (PVC) pipes measuring 2.5 cm in diameter and 3.5 cm in length as molds. Each tooth was positioned 1 mm below the enamel-cementum junction and aligned perpendicularly to its long axis.
The aging process involved placing the specimens in a two-axis masticatory simulator (MOD Dental; Esetron, Ankara, Turkey), which is computer-controlled and consists of six experimental chambers. This system is designed with two motors to control horizontal and vertical movements and has features to electronically regulate temperature and water level for simultaneous thermal cycling and movement actions. The specimens underwent thermal cycling between 5 °C and 55 °C for 60 s each, with intervals of 12 s, totaling 5000 cycles. Stainless steel balls of 5 mm diameter were used to represent the opposing tooth in the aging process. Each specimen was subjected to a chewing force of 50 N, facilitated by 5 kg disks attached to each chamber. The simulator performed 250,000 vertical movements per specimen at a distance of 2 mm, with a speed of 50 mm/sec and a chewing frequency of 2.7 Hz. After completing the cycling in the simulator, the specimens were stored in distilled water at room temperature until the fracture test was conducted. The parameters for the simulator, based on Krejci et al.‘s methods, were selected to replicate one year of masticatory forces, applying a pressure of 50 N for 250,000 cycles.
The fracture strength of the specimens was evaluated using a Universal testing machine (Instron; Instron Corp, MA, USA). The samples were clamped into the machine with axes parallel to the ground. A 5 mm in diameter stainless steel indenter was positioned centrally on the occlusal surface of each restoration. A vertical force was applied perpendicularly to the occlusal plane at a 1 mm/min crosshead speed until fracture occurred. The peak force at the fracture point was recorded in Newtons (N). Any samples that withstood the maximum force capacity of the machine, 2000 N, without breaking were classified as ‘No Fracture.’
Post-fracture examination of the specimens involved stereomicroscopic analysis (Leica EZ4 D, Leica Microsystems, Wetzlar, Germany), during which photographs of all tooth surfaces were taken. Fracture types were categorized into four distinct classifications : Type I: Adhesive failure occurred without any fracture to the tooth or restoration. Type II: The fracture was confined to the restoration alone. Type III: Both the tooth and restoration fractured, with the break occurring above the enamel-cementum junction. Type IV: Fracture of both the tooth and restoration occurred below the enamel-cementum junction. Fractures above the enamel-cementum junction were deemed “Restorable Fractures,” indicating the possibility of repair. In contrast, fractures below this junction were labeled “Unrestorable Fracture,” implying that the damage was too extensive for corrective measures.
Conformity of the data to a normal distribution was assessed using the Shapiro-Wilk test; because the data were not normally distributed, the Kruskal-Wallis H test was applied to compare fracture strengths between groups, and Dunn's test was subsequently used for pairwise group comparisons. The Chi-Square test was utilized to compare the distribution of failure types among the groups. Data were analyzed using IBM SPSS V23 (Chicago, IL, USA). Numerical data were presented as mean ± standard deviation, and categorical data were expressed as frequency (percentage). A p-value of less than 0.05 was considered statistically significant.
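For readers who want to reproduce this kind of analysis, the omnibus step can be sketched in a few lines. The values below are made-up placeholders, not study data, and a simplified tie-free Kruskal-Wallis is shown; in practice one would use scipy.stats.kruskal plus a Dunn's post-hoc implementation such as scikit-posthocs:

```python
# Minimal Kruskal-Wallis H statistic (no tie correction), stdlib only.
# H = 12 / (N (N + 1)) * sum(R_i^2 / n_i) - 3 (N + 1), with R_i the rank sum.
def kruskal_wallis_h(*groups):
    pooled = sorted(x for g in groups for x in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # assumes unique values
    n_total = len(pooled)
    s = sum(sum(rank[x] for x in g) ** 2 / len(g) for g in groups)
    return 12.0 / (n_total * (n_total + 1)) * s - 3 * (n_total + 1)

# Hypothetical fracture-strength values (N), four groups of five:
control    = [1850, 1790, 1620, 2000, 1880]
endocrown  = [1700, 1650, 1720, 1610, 1750]
post_crown = [1680, 1640, 1710, 1690, 1660]
core_crown = [1500, 1550, 1480, 1630, 1510]

h = kruskal_wallis_h(control, endocrown, post_crown, core_crown)
# Compare against the chi-square critical value with k - 1 = 3 df
# (7.81 at alpha = 0.05); H above that indicates a significant difference.
print(round(h, 2))  # 11.73
```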
The median, mean fracture strength, standard deviation, 95% confidence interval, and minimum and maximum values for each group are presented in Table and illustrated in Fig. , with the significance level set at p < 0.05. Statistical analysis revealed that the control group exhibited the highest mean fracture strength at 1830 ± 277 N, while the Core-Full-Contour Crown group had the lowest at 1532 ± 371 N. The Kruskal-Wallis test indicated significant differences in fracture strengths among the groups ( p = 0.021). Pairwise comparison showed a statistically significant difference in fracture strength between the Core-Full-Contour Crown group (1532 ± 371 N) and the control group (1830 ± 277 N) ( p < 0.05). However, no significant difference was observed when comparing the fracture strengths of the Core-Full-Contour Crown group (1532 ± 371 N) with the Post-Full-Contour Crown (1678 ± 279 N) and Endocrown (1679 ± 306 N) groups ( p > 0.05). Likewise, no significant difference in fracture strength was found between the control group (1830 ± 277 N) and either the Post-Full-Contour Crown (1678 ± 279 N) or Endocrown (1679 ± 306 N) group, or between the Post-Full-Contour Crown and Endocrown groups themselves ( p > 0.05). For a detailed account of how the groups compared in terms of fracture strength, refer to Table , which offers a comprehensive pairwise comparison. The results of the fracture strength test, analyzed using the Chi-Square test, revealed significant differences in failure types among the experimental groups (χ2 = 26.886, df = 9, p = 0.001). Within these groups, the Core-Full-Contour Crown group showed failure-type percentages distinct from the others, and these differences were statistically significant ( p < 0.05). Type-2 failures were the most prevalent, accounting for 33.75% of the observed failures across all groups. Conversely, Type-3 fractures were less common, with a frequency of only 12.5%.
The control group had the highest percentage of intact specimens at 45%. Restorable fractures occurred in 25% of cases within this group, while non-restorable fractures comprised 30%. In the Endocrown group, Type-4 fractures were the predominant failure type, representing 40% of the cases, with Type-2 fractures occurring least frequently at 15%. This group also had a considerable percentage of intact specimens, at 25%. The Core-Full-Contour Crown group predominantly exhibited restorable fracture types, with 60% being Type-2 and 25% being Type-3, while a minimal 5% were unrestorable Type-4 fractures. The Post-Full-Contour Crown group was notable for the highest incidence of Type-4 fractures at 50% and the lowest of Type-3 fractures at just 5%. The frequencies (percentages) of failure types observed in all groups are presented in Table . These distributions of failure types and their rates following fracture testing are graphically represented in Fig. . Examples of fracture types are given in Figs. , and .
Endodontically treated teeth often suffer substantial material loss from previous restorative procedures, traumatic injury, or decay. Such depletion of tooth structure poses a complex challenge for treatment planning, potentially affecting the longevity and effectiveness of the dental work . This study evaluated the fracture resistance of various restorations fabricated using CAD/CAM technology, particularly in teeth that have received root canal treatment and exhibit extensive crown destruction. Endocrowns offer a conservative restoration option by utilizing the pulp chamber and remaining tooth structure for retention, thus preserving more of the natural tooth compared to traditional post and core restorations . The choice between a post, an endocrown, or a conventional full-contour crown depends on various clinical factors, such as the amount and quality of remaining tooth structure. When greater retention is required, the use of a post may be indicated. Additionally, there are various options for fiber posts, including milled posts, which can be considered based on the specific clinical scenario . The study focused on evaluating the fracture resistance and failure patterns of different restorative approaches in teeth with severe crown damage. Endocrowns utilize the pulp chamber and remaining coronal structure for retention, which helps preserve more of the natural tooth and reduces the risk of complications such as root perforation . The full-contour crowns in our study were fabricated from feldspathic porcelain due to its favorable aesthetic properties and CAD/CAM compatibility. However, alternative materials such as porcelain-fused-to-metal (PFM) or zirconia could yield different outcomes. PFM crowns combine the strength of metal with the aesthetics of porcelain, potentially offering higher fracture resistance but with a more complex manufacturing process.
Zirconia crowns are known for their exceptional strength and durability, which could further enhance fracture resistance. Future studies should explore the performance of these materials under similar conditions to provide a broader understanding of their clinical applications. Various strategies are employed to rehabilitate endodontically treated teeth with significant crown damage, and these techniques are continually being improved. The application of a post is a widely adopted method. Some authors advocate that inserting a post reinforces the tooth's structure after root canal therapy . Conversely, another viewpoint suggests that creating a space for the post may compromise the root's integrity, potentially leading to fractures . Complications such as loss of post retention, post fracture, root perforation, and fractures extending from the full-contour crown to the root are not uncommon in teeth restored with a post and full-contour crown . The choice between a fiber post and a metal post depends on various clinical factors. Fiber posts are often preferred for their ability to distribute stress more evenly and their lower modulus of elasticity, which is closer to that of dentin, thereby reducing the risk of root fractures. In contrast, metal posts, while providing higher rigidity, can lead to stress concentrations and an increased risk of root fractures. Previous studies have supported the use of fiber posts in scenarios where preserving the structural integrity of the tooth is critical . The study found no significant difference in fracture strength between the post-full-contour crown and endocrown restorations, suggesting that both methods provide similar resistance to fracture under static loading conditions. This finding aligns with Carvalho et al. , who reported comparable fracture strength between crowns with composite resin cores and endocrowns.
Additionally, Biacchi and Basting observed higher fracture resistance in endocrowns compared to post and full-contour crown restorations when force was applied obliquely, highlighting the potential benefits of endocrowns in specific loading scenarios. The ferrule effect refers to the encircling of 1-2 mm of tooth structure by the crown, which can significantly increase the fracture resistance of the restored tooth. In this study, the minimum wall thickness was maintained at 2 mm to ensure adequate strength and support for the restoration. The presence of a ferrule is crucial in enhancing the longevity of the restoration by providing additional mechanical support and reducing the risk of fracture . The necessity of using a post depends on the amount of remaining tooth structure and the clinical scenario. Posts can provide additional retention for the core material and the final restoration, especially in cases with significant loss of coronal structure. However, in situations where sufficient tooth structure remains, endocrowns or other restorative options may be preferred to minimize the risk of root fractures and other complications associated with post placement . The control group, representing intact teeth, exhibited the highest fracture resistance, as expected given the unaltered natural tooth structure. The Core-Full-Contour Crown group demonstrated the lowest fracture resistance, which may be attributed to the properties of the composite resin core material and the interface between the core and the crown, both potential weak points. The Post-Core and Endocrown groups showed intermediate fracture resistance values, indicating that both methods offer similar reinforcement of the tooth structure.
Reasons for the observed results:
• Control Group : The highest fracture resistance in the control group can be attributed to the intact natural tooth structure, which inherently provides superior strength and integrity.
• Core-Full-Contour Crown Group : The lower fracture resistance observed in this group could be due to the composite resin's mechanical properties and potential stress concentrations at the core-crown interface.
• Post-Core Full-Contour Crown and Endocrown Groups : The similar fracture resistance between these groups suggests that both post-core and endocrown restorations effectively reinforce the tooth structure; the slight variations may be due to differences in stress distribution and the bonding interface.
• Implications for Clinical Practice : These findings highlight the importance of choosing the appropriate restoration method for the specific clinical scenario. While both post-core full-contour crown and endocrown restorations offer viable solutions for teeth with extensive crown damage, the decision should consider factors such as remaining tooth structure, ferrule presence, and stress distribution.
In light of these complications, particularly when dealing with teeth that have lost a significant amount of structure, endocrown restorations have been presented as a viable alternative, offering a conservative option that preserves more of the natural tooth than traditional post and core restorations. Endocrowns are designed to maximize retention through two mechanisms: macromechanical retention, achieved by engaging the inner walls of the pulp chamber and the access cavity margins, and micromechanical retention, the initial stage of dental adhesion in which the adhesive physically interlocks with the tooth structure before chemical bonding . The appeal of endocrowns among dental professionals stems from their conservative nature, preserving more of the tooth's natural structure, and their shorter restoration time . In their research, Carvalho et al.
compared the fracture strength of lithium disilicate-reinforced ceramic full-contour crowns on mandibular molars with differing heights of composite resin cores against endocrowns crafted from the same material. Their study assessed both dynamic and static loads. They observed no notable difference in fracture strength among the restoration types under dynamic loading conditions. Yet, when a static load was applied, full-contour crowns with a lower core height demonstrated greater fracture resistance than those with a higher core height and endocrowns. The difference in fracture strength between the higher-core-height full-contour crowns and the endocrowns was not statistically significant. These findings agree with the current study's results, which likewise show comparable fracture strength between crowns with a higher composite resin core and endocrowns . Biacchi and Basting assessed the fracture strength of fiber post and full-contour crown restorations versus endocrowns in mandibular premolars, applying force obliquely at a 135° angle to the tooth's long axis. Their findings suggested that endocrowns had a higher fracture resistance than the post and full-contour crown restorations. In the current study, however, no significant difference was observed in the fracture strength between the post-crown and endocrown restorations. The discrepancy between the present study and that of Biacchi and Basting could be attributed to the type of post utilized and the direction in which the force was applied . In the study by Salameh et al. , the impact of post-core versus post-free composite full-contour crowns on the fracture strength of restorations was examined, with zirconia used as the crown material. The research found that restorations involving a post-core setup displayed superior fracture strength.
The differences noted between the results of our study and those of Salameh et al. could likely be due to the different materials selected for fabricating the crowns. Upon completing the fracture strength test in our study, the distribution of fracture types provided insightful data. In the Post-Core and Full-Contour Crown group, half of the specimens sustained Type IV fractures. The Endocrown group demonstrated a slightly lower percentage of Type IV fractures, with 40% of the specimens affected, and a notable 25% remained intact even when subjected to the maximal axial force applied. The Core-Full-Contour Crown group primarily exhibited Type II and Type III fractures, accounting for 85% of the fractures within this group . The findings and reviewed literature suggest that endocrowns offer a reliable and conservative option for restoring endodontically treated teeth with significant crown damage. Their ability to preserve more of the natural tooth structure and provide fracture resistance comparable to traditional methods makes them a valuable option in clinical practice. However, the choice between endocrowns and traditional post and core restorations should consider individual case characteristics, including the amount of remaining tooth structure and the anticipated loading conditions. In their investigation, Rocca et al. explored the fracture strength and types of failures in resin nanoceramic full-contour crowns and endocrowns with various modifications. They found a high incidence of irreversible root fractures in endocrown specimens. Conversely, the composite resin core and full-contour crown restorations predominantly exhibited reversible fractures. This study aligns with these findings to some extent, revealing that most fractures in the core-full-contour crown group were restorable.
In the study by Abu Helal and Wang , a finite element analysis was used to compare the biomechanical behaviors of endocrowns versus fiber post and full-contour crown restorations in mandibular molars. Their research indicated that endocrowns placed less strain on the root dentin and were deemed more biomechanically favorable, especially in lower first molars. The observations from this study align with these results, suggesting that post restorations may induce more significant stress within the root dentin. Furthermore, when examining the types of fractures that occurred, the study found that Type IV fractures (those below the enamel-cementum junction, deemed unrestorable) were present in 40% of the endocrown group. This contrasts with a 50% occurrence of Type IV fractures in the Post-Crown group, indicating a potential difference in the distribution of severe fractures between the two restoration methods. Occlusal forces during standard functions like chewing are generally within the range of 40-80 N . However, individuals with parafunctional habits, such as bruxism or heavy occlusal loading, can exhibit much higher forces, reaching up to 570 N in the anterior region and 910 N in the posterior region . These figures highlight that natural teeth and dental restorations must be robust enough to withstand significant forces in the oral cavity. The results of the present study suggest that the dental restorations tested, when subjected to axial forces, demonstrate mean fracture strengths capable of withstanding the forces commonly experienced in the mouth. This resilience was noted both in restorations using post and core methods and in endocrown treatments. The findings underscore the importance of considering these forces in designing and selecting dental restorations to ensure their durability and functionality over time.
The adoption of CAD/CAM technology in dental clinics is on the rise, appreciated for its ability to reduce the margin of error in restorations and enable rapid production . The use of pre-manufactured blocks also helps avoid the inconsistencies in material properties that can arise during restoration fabrication. The primary goals driving the development of CAD/CAM systems are to provide high-quality restorations from these prefabricated blocks, to standardize the restoration shaping process for consistency, and to reduce the overall costs of production .
• Fracture Strength and Patterns : None of the restoration groups were stronger than intact teeth.
• Core-Full Contour Crown Restorations : Core-full contour crown restorations had the lowest fracture resistance.
• Fracture Types : Type II fractures were most common.
• Recommendation on Endocrowns : Based on the findings of the present study, endocrowns cannot be recommended over post-crown applications, as the difference is not statistically significant.
Below are the links to the electronic supplementary material.
Supplementary Material 1
Supplementary Material 2
Supplementary Material 3
|
Remote screening of diabetic retinopathy through a community-wide teleophthalmology program in Mumbai | 5735d4b8-51ea-4ada-a43b-02bb22642689 | 9359295 | Ophthalmology[mh] | Nil.
There are no conflicts of interest.
|
Temporal and spatial heterogeneity of HER2 status in metastatic colorectal cancer | 8494f71c-c1c0-4dc8-8856-9b542e5b59be | 11193188 | Anatomy[mh] | Colorectal cancer (CRC) is the third most common cancer and the second leading cause of cancer-related death worldwide, with nearly 2 million new cases diagnosed and about 1 million deaths per year . Almost 50% of CRC patients will develop liver metastases, and less than a third will be candidates for surgical resection . The management of metastatic colorectal cancer (mCRC) depends on the resectability of the metastases, the patient's condition and the tumor's molecular features. In many cases, several biomarkers, such as KRAS, NRAS, BRAF and mismatch repair (MMR) status, are routinely assessed to adapt the therapeutic strategy . Recently, the role of human epidermal growth factor receptor 2 (HER2) as a new target has emerged in mCRC. HER2 is a strong oncogenic driver, and trastuzumab, the first monoclonal antibody blocking HER2, has become the standard treatment for HER2-positive advanced gastric cancer . In mCRC, several phase II clinical trials have demonstrated the efficacy and tolerability of different dual HER2-targeted therapies . However, this clinical efficacy was optimal in patients without RAS mutations . More recently, a clinical trial evaluating trastuzumab deruxtecan, an antibody-drug conjugate carrying a topoisomerase I inhibitor payload, has shown promising activity in mCRC, irrespective of RAS mutation status . In these trials, patient recruitment is mainly based on immunohistochemistry and in situ hybridization. Indeed, in CRC, a specific HER2 scoring system relying on these two techniques has been developed to identify CRC patients eligible for clinical trials . Moreover, HER2 amplification has been associated with resistance to anti-EGFR treatment in RAS and BRAF wild-type mCRC. In this setting, it is necessary to provide an accurate assessment of HER2 status.
It can be challenging in cases where tumors show heterogeneous HER2 expression across different locations. Thus, in breast cancer and gastric cancer, it has been described that such situations can lead to discrepancies in HER2 status between primary tumors and metastases . In CRC, only a few studies are available regarding HER2 heterogeneity. Moreover, most of them have been based on different scoring systems, with series including various numbers of cases . In addition, spatial and temporal heterogeneity has never been precisely described . Thus, the aim of this study was to compare the HER2 status between primary CRC and their corresponding liver metastases.
Patients
Patients who underwent surgery for a primary CRC and synchronous or metachronous liver metastasis resection in the digestive surgery department of Besançon University Hospital, between April 1999 and October 2021, were selected for this study.
Tissue microarray manufacturing
Tissue microarrays (TMA) were constructed from the most representative formalin-fixed paraffin-embedded (FFPE) blocks of the primary CRC and the corresponding liver metastasis. The punch diameter was 1 mm and each tumor had three TMA spots. In addition, when multiple synchronous or metachronous liver metastases were present in the same patient, a supplementary TMA was built from them.
Determination of HER2 status
HER2 immunohistochemistry
HER2 immunohistochemistry (IHC) was initially assessed using 4 µm sections of the TMA blocks. Immunostaining was performed on the Ventana Benchmark automatic immunostainer® (Roche Diagnostics, Meylan, France), using a VENTANA anti-HER2/neu® (4B5) rabbit monoclonal primary antibody, according to the manufacturer's instructions. Each section included external positive controls. The HER2 IHC status was assessed according to Valtorta et al.
It was defined as negative (0, no staining; 1+, faint staining regardless of cellularity; 2+, moderate staining with < 50% positive cells; 3+, intense staining with ≤ 10% positive cells), equivocal (2+, moderate staining with ≥ 50% positive cells) or positive (3+, intense staining with > 10% positive cells), and scored by two pathologists. In cases of discrepancy, consensus was reached by joint review of the cases where the pathologists' interpretations initially differed.
Validation of the TMA method for HER2 screening
To evaluate the reliability of the TMA method, an additional HER2 IHC on whole slides (WS) was performed for TMA spots with an IHC score of 1+, 2+ or 3+, as well as for 10 randomly selected TMA spots with an IHC score of 0.
HER2 fluorescent in situ hybridization
Fluorescent in situ hybridization (FISH) was performed on WS of CRC with an equivocal (2+ with ≥ 50% of positive cells) or positive (3+ with > 10% positive cells) HER2 IHC status. FISH using the ZytoLight® SPEC ERBB2/CEN17 Dual Color Probe Kit (CliniSciences, Nanterre, France), according to the manufacturer's instructions, was used to assess HER2 amplification. Scoring was performed by counting ERBB2 and CEN17 signals in 100 non-overlapping tumor nuclei. Tumors with an ERBB2/CEN17 ratio ≥ 2 were considered amplified; otherwise, they were considered non-amplified .
Patients' characteristics
Clinical parameters were retrospectively collected by review of the medical files. These parameters included age, gender, WHO performance status at diagnosis, neoadjuvant and/or adjuvant treatment, anatomical site and TNM stage according to the UICC 8th edition. The histological and molecular parameters collected included CRC histological type and grade according to the 2019 WHO classification, lymphovascular and perineural invasion, lymph node status, MMR status and KRAS, NRAS and BRAF status.
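The scoring rules above amount to a small decision procedure; a hedged sketch follows (function names are ours, and the thresholds are those stated in this section):

```python
# Sketch of the HER2 calling rules described above: Valtorta-style IHC
# scoring plus the FISH ERBB2/CEN17 ratio cut-off.
def ihc_category(intensity: int, percent_positive: float) -> str:
    """Map staining intensity (0 to 3, for 0/1+/2+/3+) and % positive cells
    to negative / equivocal / positive."""
    if intensity <= 1:
        return "negative"                      # 0 or 1+ is always negative
    if intensity == 2:
        return "equivocal" if percent_positive >= 50 else "negative"
    return "positive" if percent_positive > 10 else "negative"  # 3+

def fish_amplified(erbb2_signals: int, cen17_signals: int) -> bool:
    """Amplified when the ERBB2/CEN17 ratio over ~100 nuclei is >= 2."""
    return erbb2_signals / cen17_signals >= 2

print(ihc_category(2, 60))       # equivocal -> reflex to FISH
print(fish_amplified(420, 180))  # ratio ~2.33 -> True
```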
Statistical analysis
The HER2 IHC status in the primary tumor and corresponding liver metastases was expressed as percentages with 95% confidence intervals (CI) and concordance was assessed using the Cohen's kappa coefficient. The statistical analysis was performed with R software v.4.0.2.

Ethics
The project was approved by the scientific board of the Regional Biobank of Franche-Comté, France (BB-0033-00024), ensuring patients' informed consent. The study protocol conforms to the ethical guidelines of the 1975 Declaration of Helsinki (6th revision, 2008).
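As a minimal sketch of the statistical summary described in the methods (a proportion reported with a 95% confidence interval), one common choice is the Wilson score interval. The original analysis was performed in R and the exact interval method is not stated, so this Python version is an assumption made purely for illustration.

```python
# Hedged illustration: 95% Wilson score interval for a proportion.
# The paper's analysis used R v4.0.2; the CI method it used is not
# specified, so this is only one plausible example, not the
# authors' implementation.
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Return (lower, upper) bounds of the Wilson score 95% CI."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    margin = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - margin, center + margin

# e.g. 87 concordant pairs out of 108 patients:
lo, hi = wilson_ci(87, 108)
print(f"{lo:.3f} - {hi:.3f}")   # 0.721 - 0.869
```

The Wilson interval is preferred over the simple normal approximation for proportions near 0 or 1, which matters here given the very low frequency of amplified cases.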
Clinicopathological characteristics
Tumor tissue samples from 108 patients who had colorectal and liver resection were collected (Fig. ). The relevant clinicopathological characteristics of the patients are summarized in Table . Seventy-six (70%) patients had synchronous liver metastases and 32 (30%) metachronous metastases.

HER2 status
The numbers of primary CRC with IHC scores of 0, 1+ and 2+ were 89 (82.4%), 17 (15.8%) and 2 (1.8%), respectively. The numbers of corresponding liver metastases with IHC scores of 0, 1+ and 2+ were 99 (91.7%), 7 (6.5%) and 2 (1.8%), respectively. None of the CRC was scored 3+ (Table ). A complete concordance between HER2 TMA and HER2 WS was observed in the 10 randomly selected patients with a HER2 score of 0. FISH detected HER2 amplification in only one case (1/108; 0.9%) among the IHC 2+ samples, present in both the primary CRC and the corresponding liver metastasis (Fig. ). This case corresponded to a 45-year-old female patient with a low-grade NOS adenocarcinoma of the left side, associated with perforation and a synchronous liver metastasis, but without lymph node invasion. The patient was initially treated by surgery and adjuvant chemotherapy and progressed 3 years later with a pulmonary metastasis.

Concordance of HER2 status between primary tumor and liver metastasis
The overall concordance between primary CRC and their paired liver metastases was 80.5% (Table ).
Out of 108 cases, 84 (77%), 2 (1.8%) and 1 (0.92%) were scored 0, 1+ and 2+, respectively, on both the primary CRC and the corresponding liver metastasis. For 21 patients (19%), the HER2 status of the primary CRC differed from that of the liver metastasis. Five patients (4.6%) were scored 0 on the primary CRC and 1+ on the liver metastasis (Fig. ). Conversely, 14 patients (12%) showed 1+ staining on the primary CRC and 0 on the liver metastasis (Fig. ). One patient (0.92%) showed 1+ staining on the primary CRC and 2+ on the liver metastasis, and one patient (0.92%) showed 2+ staining on the primary CRC and 0 on the liver metastasis. The Cohen's kappa coefficient was 0.17, corresponding to a very low concordance. Among patients with concordant status, 28 (32.2%) had metachronous and 59 (67.8%) synchronous metastases. Among the 21 patients who presented a discrepancy in HER2 status between the primary CRC and the metastasis, four (19.1%) had metachronous metastases and 17 (80.9%) had synchronous metastases. The characteristics of these patients with discordant HER2 status are summarized in supplementary Table 1. A chi-square test showed no significant difference between metachronous and synchronous metastases regarding HER2 status (p = 0.237).

HER2 status in multiple liver metastases
HER2 status was analyzed for 24 patients with multiple liver metastases. The number of metastases per patient varied from 2 to 13 lesions. Overall, 8 (33.3%) were scored 1+ and 16 (66.7%) were scored 0. None of the metastases was scored 2+ or 3+. For 5 out of 24 patients, the liver metastases showed different scores, a discrepancy rate reaching 21%. This concerned 2 patients with metachronous metastases and 3 patients with synchronous metastases (Fig. ).
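The reported agreement statistics can be re-derived from the counts given above. The following sketch rebuilds the 3×3 cross-tabulation (primary score versus metastasis score) from the text and recomputes the overall concordance and Cohen's kappa; the original analysis was done in R, so this Python version is only a re-derivation for illustration.

```python
# Recompute overall concordance and Cohen's kappa from the counts
# reported in the text. Rows: primary CRC score 0 / 1+ / 2+;
# columns: liver metastasis score 0 / 1+ / 2+.
table = [
    [84, 5, 0],   # primary 0:  84 concordant, 5 shifted to 1+
    [14, 2, 1],   # primary 1+: 14 shifted to 0, 2 concordant, 1 to 2+
    [1,  0, 1],   # primary 2+: 1 shifted to 0, 1 concordant
]

n = sum(sum(row) for row in table)                 # 108 patients in total
observed = sum(table[i][i] for i in range(3)) / n  # observed agreement
row_tot = [sum(row) for row in table]              # primary marginals: 89, 17, 2
col_tot = [sum(table[i][j] for i in range(3)) for j in range(3)]  # 99, 7, 2
expected = sum(row_tot[k] * col_tot[k] for k in range(3)) / (n * n)
kappa = (observed - expected) / (1 - expected)

# The paper reports the concordance as 80.5% and kappa as 0.17.
print(n, round(observed, 3), round(kappa, 2))   # 108 0.806 0.17
```

The low kappa despite the ~80% raw agreement illustrates why kappa is reported: most pairs fall in the 0/0 cell, so much of the raw agreement is expected by chance from the skewed marginals alone.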
The aim of this study was to analyze the concordance of HER2 status between primary CRC and their corresponding liver metastases. Indeed, the precise evaluation of this biomarker is mandatory, as the expansion of new treatments targeting HER2 in this location has recently led to promising results, mainly in RAS wild-type tumors. In our series, based on 108 patients and 285 samples, we found a significant discrepancy between primary CRC and their paired metastases, reaching 19.5%. This rate reached 21% between the multiple liver metastases resected in each patient. This discrepancy concerned the 0, 1+ and 2+ IHC categories, as only one case of 2+ IHC HER2-amplified CRC was observed, with the same status at the primary and metastatic sites. This low frequency of HER2-amplified CRC is in accordance with the literature, which reports rates between 2 and 5%. Few studies have compared the HER2 status of primary CRC and its corresponding metastases.
Moreover, they did not use the latest recommended scoring system, in contrast to our work, which is based on the Valtorta criteria. In addition, they did not analyze multiple synchronous or metachronous metastases originating from the same patient. Lee et al. reported a discrepancy rate of 14.6% between primary CRC and liver metastases; however, the interpretation of IHC staining was based on the criteria defined for gastric cancer. In the study by Chen et al., discrepancy was also frequently observed in paired tumor samples encompassing primary CRC and brain metastases. According to the study by Shan et al., a discrepancy in liver metastases compared to primary CRC was present in 27.3% of cases. Recently, Hashimoto et al. found a discordance rate of 7% for HER2-amplified tumors and 19% for HER2-low tumors between primary CRC and metastases. Additionally, we observed a discrepancy rate reaching 21% among the multiple liver metastases resected in a given patient. This rate was similar in synchronous and metachronous liver metastases. Thus, our work highlights the temporal and spatial heterogeneity of HER2 status that can be observed in CRC. Our study took into consideration the "HER2-low status", which includes 1+ and 2+ non-amplified cases, associated with a discrepancy rate reaching almost 19.5% between the primary CRC and its paired metastasis. This low level of HER2 expression represents an opportunity to offer a new approach with an antibody-drug conjugate (ADC) such as trastuzumab deruxtecan (T-DXd). This therapeutic mechanism relies on the ADC binding to HER2 protein found on malignant cells, even with a low level of expression. After internalization and cleavage, DXd causes targeted DNA damage and apoptosis in cancer cells. Thus, it is a different pathway from the targeting of HER2 2+ amplified / HER2 3+ tumors, whose aim is to neutralize the oncogenic addiction provided by HER2 overexpression.
This therapeutic approach to HER2-low tumors has been successfully validated in breast cancer and is promising in gastric cancer, but has not yet demonstrated positive effects in CRC. However, in this setting, only one study is available and clinical trials regarding this approach are still ongoing. Therefore, this particular immunohistochemical pattern still has to be considered. Theranostic biomarker heterogeneity remains a challenge in the management of solid tumors, potentially leading to under- or overtreatment. In this setting, many studies have been performed, leading to different results according to the tumor type and the biomarker analyzed. Regarding the MMR status in CRC, recent studies have demonstrated a high concordance rate between primary CRC and their metastases. However, debate surrounds the RAS and BRAF status in primary CRC and corresponding metastases. While a review of multiple CRC biomarkers, including RAS and BRAF status, showed a strong agreement between the primary CRC and its metastatic site(s), therapeutic pressure induced by chemotherapy and/or targeted treatment may alter the status post-treatment. The CRICKET study highlights how tumors that are initially RAS wild-type may become resistant to anti-EGFR therapy through the emergence of RAS-mutated clones, and then recover a RAS wild-type status after stopping the targeted treatment. These data illustrate dynamic tumor heterogeneity under treatment pressure. Taken together, these data support the use of an approach that provides a more accurate assessment of the HER2 status and overcomes heterogeneity. In this setting, liquid biopsy relying on circulating tumor DNA (ctDNA) may offer a better way to characterize HER2 status in patients with metastatic CRC. Some clinical trials, such as the TRIUMPH study, have reported a very good concordance between liquid- and tissue-based approaches.
However, this biomarker analysis was mainly designed to select HER2-amplified / 3+ tumors associated with a high DNA copy number, rather than to screen for HER2-low tumors. As this assay is designed to detect DNA alterations in the blood, such as amplification, and not the absence or low level of protein expression represented by IHC 0 and HER2-low CRC (which include 1+ and 2+ non-amplified cases), the evaluation of HER2 by IHC remains relevant. In conclusion, our study highlights the temporal and spatial heterogeneity of HER2 status between the primary colorectal tumor and synchronous or metachronous liver metastases. Our data underline the importance of HER2-low CRC, which can be taken into account in this era of precision medicine and innovative therapeutic options, and raise the question of testing different tumor sites for HER2 status.
Supplementary Material 1.
A practical guide to public involvement with children and young people in dental research | 33f2d3a0-a0a6-4c13-b916-a5a981d11d77 | 11761068 | Dentistry[mh] | Public involvement (PI) in health research is an umbrella term which describes the process by which research is undertaken ‘with' or ‘by' people rather than ‘to', ‘about' or ‘for' them. The terms patient and public involvement (PPI) and PI are often used interchangeably but have subtle differences in their definition. PPI provides separate definitions of patients and the public; patients are seen as current or former users of health and social care services, with the public seen as anybody else, such as potential users of healthcare services. PI encompasses both, including current, former or potential patients and those who represent patients, carers and family members. Public involvement differs to public engagement. Engagement focuses on the dissemination of research information and knowledge to the public, for example raising awareness of research or disseminating research findings. PI involves a partnership between the researchers and the public, empowering the public to influence decision-making at all stages of the research process. This may include a range of activities, including prioritising research themes, working as part of a project advisory group, informing the development of research materials, or carrying out user-led research. Regardless of the activity or stage, PI should be meaningful, empowering the public to inform research development, and not simply a tick-box exercise. It is also important to consider that children and young people (CYP) may also be current, former or potential service users, carers or family members and should be involved in PI. Health research should have the overarching aim of meeting the needs of the public, including where those groups are CYP. 
To meet this aim, it is important to work with those who have relevant lived experience or knowledge, including CYP, facilitating their voice to produce research which is relevant to their needs. In recent years, the expectation from funders for PI to be part of research has increased, with many funders stipulating that applicants demonstrate how the public will actively be involved in the design and delivery of their research. In 2022, several funding bodies, including UK Research and Innovation and the National Institute for Health and Care Research (NIHR), signed a shared commitment to improve public involvement in research, stating that 'public involvement is important, expected and possible in all types of health and social care research'. In addition to this, there has been a drive for PI in wider fields, such as NHS service delivery, clinical guideline development and the UK parliamentary system. PI encompasses all activities which aim to include the public, including CYP, in the research process; however, the extent to which people are involved may differ. Involvement with CYP may take place at different phases of the research process and at different levels. An adapted participation matrix, originally developed by Shier, describes three levels of involvement - consultation, collaboration and user-led - across the different phases of the research process. Consultation describes a one-off involvement process, where CYP provide opinions on certain aspects of a proposal to inform the research but are not actively involved in decision-making on an ongoing basis. Collaboration describes ongoing involvement with CYP, where they are actively involved in the research process. In this case, CYP work alongside researchers, providing input into areas such as research design and/or data collection or analysis and/or dissemination.
User-led describes a research process which is led by CYP, rather than the researcher. With support from researchers, CYP design and deliver the research project. This may be the sole study, or there may be PI-led elements within a larger research study. To undertake high-quality research with CYP, it is important that they are involved both in PI and as participants in the research. It is important to include CYP as active participants in research, where they are able to provide their experiences and opinions, rather than using proxies such as parents. In a systematic review published in 2015, only 17.4% of dental research was undertaken with CYP, where CYP were participants in the study. Additionally, 18.1% of studies used proxies for CYP and 64.2% undertook research on children, where they were subjects and not involved in the research. While this is an improvement from 2007, when only 7.3% of dental research was with CYP, there is still a need for significant improvement in the extent to which CYP are involved in research.
The United Nations Convention on the Rights of the Child (UNCRC) provides CYP with a comprehensive set of human rights. Article 12 of the UNCRC states that 'every child has the right to express their views, feelings, and wishes in all matters affecting them, and to have their views considered and taken seriously'. CYP should have the opportunity to contribute directly, and this input can have many benefits for both CYP and the research. CYP can be involved from the start, aiding in the identification and prioritisation of research questions. Through the design, CYP can inform recruitment strategies, techniques for data collection and dissemination of results. This can have great benefits for research, including wider involvement of CYP and improved recruitment and retention. Involvement in research supports CYP to develop a wide range of research skills, such as writing and public speaking. This has been associated with a self-perceived improvement in confidence, self-esteem and employment opportunities. CYP report positive experiences of involvement in research, such as feeling part of a team, feeling listened to, empowerment and a greater understanding of their rights.
Early involvement of CYP may identify a research question relevant to your local community or context which may not otherwise have been identified. While advocating for early involvement in the research design process, we note this may not be possible, for example, where funding is provided for a pre-defined research question. Despite this, engagement with CYP as early as possible is beneficial for both the research and CYP. It is important to consider the level of involvement you anticipate using: consultation, collaboration or user-led. There are many factors which may influence this decision, such as time availability, funding availability, the type of research being undertaken and previous PI experience. Time and funding availability are some of the biggest limiting factors when considering public involvement and can impact a researcher's ability to undertake meaningful PI. Where possible, appropriate time and funding should be incorporated into the research design to facilitate ongoing PI. Where this is not possible, a pragmatic approach is needed to consider how CYP can be involved in the research. In these cases, consultation approaches are often used to gain feedback from CYP; however, it is vital that this is appropriately planned and the feedback actioned, to avoid this becoming a tick-box approach to PI. The nature of the research can influence the type of PI planned and, in some studies, it may not be appropriate to have CYP involved in all aspects of the research. However, in such studies, CYP can have a vital role in developing techniques for disseminating the research. Undertaking PI for the first time can be daunting, but it doesn't mean that you can't undertake meaningful PI. While the levels of PI are often described in isolation, there may be a natural development from consultation to collaboration or user-led research.
We note that collaboration and user-led research can be easier once the researcher has developed a relationship with a community of CYP with an interest in the research area. Meaningful consultation can have great benefit for research and can help foster partnerships with CYP, opening the door for further involvement where CYP have greater autonomy.
PI is a fluid and ever-evolving process and is highly dependent on the research area and the CYP involved. Considering the needs of CYP, it is almost impossible to create a one-size-fits-all approach to PI. However, there is guidance available from a range of sources:

- UNCRC Article 12: the right of the child to be heard
- UK Standards for Public Involvement in research
- NIHR: briefing notes for public involvement in the NHS, health and social care research
- Top tips for involving CYP in research from CYP's point of view
- Royal College of Paediatrics and Child Health: engaging children and young people.

While ethical approval is often not needed, the underlying ethical principles should still apply to PI processes. These include areas such as informed consent, safeguarding, ensuring confidentiality, minimising risk of harm and training for researchers and PI members (where required). Ethical approval may be required, for example, for user-led research, although this remains a point of discussion, as outlined by Nollett et al. If unsure, it is important to discuss this with your local institution.
The CYP involved in PI will depend on the type of research being undertaken and the nature of the input required. It is important to be flexible in your approach to identifying those to be involved in your PI, as this may evolve as your research progresses. Firstly, consider the population you plan to be involved in the research. This may be associated with characteristics such as age, location, or a certain health condition or lived experience. Secondly, consider the level of involvement you are looking for, for example, a one-off consultation or a long-term, user-led research project, as this may alter the initial approach. Once the target group has been identified, methods to advertise the involvement opportunities should be considered. It is useful to consider whether your organisation, such as an NHS trust or university, already has an established link to existing groups which can be used. These may include:

- Existing local/regional/national young person's advisory group (YPAG) - identify whether there is a local YPAG in your region. There may be a GenerationR YPAG near you, which is an alliance of YPAGs across the UK funded by NIHR and/or NHS organisations through various channels. Contact the co-ordinator to discuss involvement of the YPAG
- Existing PI groups relevant to your research theme - there may be a regional or national PI group relevant to your research area. They may be able to be involved in your work or may be able to provide input as to the best place to advertise for the CYP you are looking to involve
- Charities or support groups - a wide range of charity groups, support groups or patient networks exist locally, regionally, nationally and internationally. These groups may be aimed at certain populations, such as those with specific conditions or of a certain age, so it is important to identify if there is a group relevant to your research. These may be of particular benefit for research regarding rare diseases, as they can help identify those who may be current patients, carers or family members
- Relevant settings - there may be settings which are best suited to the CYP you are looking to involve. Examples include healthcare settings, activity groups or clubs. It can be useful to contact these areas and discuss the possibility of advertising through these networks. When considering healthcare settings, there may be wider ethical considerations associated with these settings
- Social media - social media may be useful to disseminate this information through wider networks.

The characteristics of those involved also need to be considered, with a desire to maximise the diversity of the group. There may be groups of CYP who are less likely to be involved in research and, while there is a wide range of terminology used to describe these groups, they are often defined as under-served groups. This definition best reflects that research should better serve these groups and facilitate their involvement. Deliberate plans must be made to actively offer these groups an opportunity to be involved in research. Researchers may consider contacting those who have an existing relationship with these groups, who can be described as 'gatekeepers'. These gatekeepers can be a range of people who work in different settings, such as healthcare professionals in the community, those in community groups, such as children's centres and clubs, or religious groups. Contacting such gatekeepers and explaining the rationale behind the research and the expectations of the PI will be useful. Gatekeepers may suggest adaptations to the planned PI to support CYP involvement and can suggest the best way to advertise to increase involvement.
Additionally, advertisement through these gatekeepers, who are often trusted people within the community, can facilitate rapport-building and subsequent involvement, rather than advertising coming from an unknown researcher. Building trust and rapport with gatekeepers and communities takes time, and this should be considered in the research planning.
Adapting the setting of PI can be useful to help get CYP involved. Holding events in a location which is familiar to the public can aid involvement. For example, rather than inviting people to attend a meeting with you at a different location, try to hold a session in a convenient location, or attend a scheduled meeting with an existing group. This can ease the process of involvement and help reduce the burden for CYP. There are many methods you can use to involve CYP in research. Common examples include questionnaires, interviews or focus group discussions, and interactive workshops. The methods used are flexible depending on the CYP involved, but they should make it encouraging and easy for CYP to give their opinions. Reasonable adjustments should be made to facilitate the involvement of CYP who may need additional support for communication. While the input is coming from CYP, some CYP may prefer to have their parent or guardian present for support, while others may not. Discuss this with the CYP involved and make adaptations so that all CYP are comfortable. If parents are present, ensure they have information regarding their supporting role to ensure that the CYP's voice leads the discussion. There may need to be several events where CYP of a similar age range are together so that discussions can be pitched at the appropriate level of understanding. When involving CYP, timing is also of particular importance. CYP often have busy lives, including extra-curricular activities, crucial timings, such as GCSE and A-level exams, and other personal responsibilities. It is important to be flexible, offering after-school times, weekends or school holidays, depending on their preference. Virtual events may be more convenient and can be useful for PI which is required over a large geographical area. Face-to-face events can allow CYP to discuss with each other, but it is important to hold these in a place convenient to the CYP.
It is important to budget sufficient funding for PI. Guidance for remuneration is available from the NIHR. Consider the length of time and level of commitment required for the involvement and be transparent with CYP regarding the commitment. Remuneration should reflect the level of commitment and any associated costs, such as travel expenses. Shopping vouchers are a popular method of remuneration for CYP. Direct monetary payment can be considered, such as for travel expenses, although this can have complexities and local guidance should be followed. Some CYP will want to be involved in research and prefer not to be remunerated for this, and it is important to respect these wishes.
Evaluation is a key component of PI. It is important to consider the impact involvement has on both the public and the research. Firstly, it is important that CYP are provided with feedback regarding the input they have provided and how this influenced the research. It is important that this feedback is transparent and timely. Failure to do so can leave those involved dissatisfied and with feelings of it being a tick-box exercise. There are many ways that impacts of PI can be shared, such as a newsletter or a website. It is beneficial to discuss the preferred ways of receiving updates with those involved to ensure it is timely and relevant to their needs. The research team should gain feedback from those involved regarding their experience of the PI process and how it met their expectations. Key areas to evaluate include setting, timing, activities, feeling heard, meeting expectations and areas for development. This can take many formats but questionnaires or open discussions with those involved are commonly used. Reflections from the researcher using a diary can also be helpful to note key discussions or developments from PI.
Sharing learning about CYP involvement in practice is aided through systematic evaluation and reporting of what works best for CYP, of what impact their involvement has on the actual research, and of its effect on those who get involved. To aid the reporting process, the Guidance for Reporting Involvement of Patients and the Public (GRIPP2) checklist has been developed to improve the quality and consistency of reporting. An example GRIPP2 short form can be seen in , outlining the key areas for reporting. In addition to reporting PI within scientific publications, it is important to consider dissemination of the overall research outputs and PI contributions to the public; this is where public involvement meets public engagement, which focuses on the dissemination of research to the public. There are many methods of public engagement, such as open days or community events, social media and websites, and the choice will likely vary depending on the nature of the research. Dissemination can both demonstrate the impact of PI and research and encourage others to be involved in research in the future.
As demonstrated, there is great scope for PI in dental research, with benefits for the researcher, CYP and the research output. At all levels of PI, there is opportunity for meaningful relationships to be built with CYP to create research which is both achievable and relevant to their needs and desires. While PI can have great benefits to dental research, it is important to acknowledge challenges that research teams face during the PI process. Limitations in time, funding or ability to engage sufficient numbers of CYP are often reported by researchers. However, this does not mean that the PI produced will not be meaningful. Proactive engagement of CYP through a range of methods, transparent reporting and research reflection are key in preventing PI becoming a tick-box exercise. Using these principles, there is opportunity for involvement of CYP in a range of settings and the authors actively encourage readers to involve CYP in decision-making for what is, after all, their research.
Coupling Drug Dissolution with BCS

In a recent article we introduced the concept of Finite Dissolution Time (F.D.T.) as an intuitive extrapolation of the Finite Absorption Time (F.A.T.) concept. This is a plausible extrapolation since drug dissolution takes place under in vivo conditions for a finite time regardless of the complete or incomplete dissolution of the dose administered. The finite character of both terms, F.A.T. and F.D.T., is physiologically sound since drug absorption does not take place beyond the absorptive sites, while drug dissolution is likewise unimportant beyond the absorptive sites. Accordingly, the inception of the F.A.T. and F.D.T. concepts is linked with the physiological constraints of the dissolution and absorption processes under in vivo conditions. However, it is not uncommon to see in vitro dissolution profiles reaching a plateau value of 100% of dose dissolved at finite time. In this particular case, the term F.D.T. denotes the time needed for the complete dissolution of the drug dose. Besides, the F.D.T. is, in essence, included in today's regulatory biowaiver guidelines; this is so since the rapid (< 30 min) or very rapid (< 15 min) dissolution criterion for the biowaivers implies completion of the dissolution process in finite time. Intuitively, the F.D.T. estimate, τ_d, considered under in vivo conditions is equal to or shorter than the F.A.T. estimate, τ, namely, τ_d ≤ τ. For Class II drugs, τ_d = τ; for Class I and III drugs, τ_d < τ; while for Class IV drugs both relationships, i.e., τ_d = τ and τ_d < τ, are possible. We should note that a Class II drug with basic properties can be completely dissolved in the stomach, not precipitate, be absorbed in the intestine and essentially behave like a BCS Class I drug. So far, the mean dissolution behavior of solid drug particles has been quantified with the mean dissolution time (M.D.T.)
and the mean dissolution time for saturation (M.D.T.s.) for drugs whose dose is completely or not completely dissolved at the end of the dissolution process, respectively. Both terms correspond to a stochastic interpretation of the dissolution process, since the profile of the accumulated fraction of drug amount dissolved from a solid dosage form gives the probability distribution of the residence times of the drug molecules in the dissolution medium. The fraction of drug dissolved is always a distribution function, and therefore it can be characterized by its first (statistical) moment, which is the M.D.T. The latter term holds only when the entire available drug dose is dissolved completely. When drug particles remain undissolved at the end of the dissolution process, the M.D.T. is not defined since it is equal to infinity. In this case, the Mean Dissolution Time for saturation (M.D.T.s.) is coined and refers only to the portion of the dose that is actually dissolved. Unfortunately, the clear distinction between M.D.T. and M.D.T.s. has neither been recognized nor adopted in the literature so far. In this work, we show that the three parameters, F.D.T., M.D.T. and M.D.T.s., lie at the heart of the Biopharmaceutics Classification System (BCS); this allowed us to couple dissolution time considerations with the BCS. A dissolution-based temporal version of the BCS, the so-called T-BCS, was developed. The temporal classification of Class I and III drugs, whose dose is completely dissolved in the dissolution medium, is based on the M.D.T. values, while Class II and IV drugs, whose dose is not completely dissolved in the dissolution medium, are classified according to their M.D.T.s. values, using the time axis (M.D.T.)⁻¹ or (M.D.T.s.)⁻¹, respectively. In addition, drugs/formulations which exhibit a finite dissolution time (F.D.T.) for complete dissolution of the drug dose can also be classified in Class I or III using the (F.D.T.)⁻¹ axis.
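The distinction between M.D.T. and M.D.T.s. can be made concrete with a short numerical sketch in plain Python. The rate constant k and the ratios q below are illustrative, not values from the paper's dataset: when q < 1 the first moment of a simulated exponential profile reproduces the closed-form Noyes-Whitney M.D.T. (Eq. 8), whereas when q > 1 only the saturating portion dissolves and the first moment equals 1/k, the M.D.T.s.

```python
import math

def phi_nw(t, k, q):
    """Noyes-Whitney fraction-of-dose-dissolved profile, capped at 1."""
    return min(1.0, (1.0 - math.exp(-k * t)) / q)

def mean_time_from_profile(times, phis):
    """First statistical moment of a dissolution profile: sum of t*dPhi,
    normalized by the plateau (M.D.T. if the plateau is 1, else M.D.T.s.)."""
    num = sum(0.5 * (t0 + t1) * (p1 - p0)
              for t0, t1, p0, p1 in zip(times, times[1:], phis, phis[1:]))
    return num / phis[-1]

k = 2.0                                      # h^-1, illustrative
times = [i * 0.001 for i in range(20001)]    # 0..20 h grid

# q < 1: the whole dose dissolves -> M.D.T. (closed form: (q-(q-1)ln(1-q))/(kq))
q = 0.5
mdt_num = mean_time_from_profile(times, [phi_nw(t, k, q) for t in times])
mdt_closed = (q - (q - 1.0) * math.log(1.0 - q)) / (k * q)

# q > 1: saturation at 1/q -> M.D.T.s. = 1/k, independent of q
q2 = 3.0
mdts_num = mean_time_from_profile(times, [phi_nw(t, k, q2) for t in times])
```

The helper names (`phi_nw`, `mean_time_from_profile`) are ours; the sketch only illustrates why the two summary times must be kept distinct.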
The dimensionless dose/solubility ratio, q, normalized in terms of the volume (900 mL) of dissolution medium, is given by Eq. (1):

$$q=\frac{Dose}{C_{s}V} \qquad (1)$$

Graphical estimation of M.D.T. or M.D.T.s.

The mean dissolution time (M.D.T.) corresponds to the first moment that can be determined from the experimental dissolution data using the following equation:

$$M.D.T.=\frac{\int_{0}^{\infty}t\,dW(t)}{\int_{0}^{\infty}dW(t)} \qquad (2)$$

where W(t) is the cumulative amount of drug dissolved at time t. Estimates for M.D.T. or M.D.T.s. can be obtained graphically by calculating the area (ABC) between the fraction of dose dissolved (Φ) – time curve and the plateau level, Fig. . When the plateau level is equal to one (Φ∞ = 1) an estimate for M.D.T. can be derived from Eq. (3), Fig. a. Similarly, an estimate for M.D.T.s. can be derived from Eq. (3) when the plateau level is Φ∞ < 1, Fig. b.

$$M.D.T.\ \text{or}\ M.D.T.s.=\frac{ABC}{\Phi_{\infty}} \qquad (3)$$

The Noyes-Whitney Equation Model

Since the very first experiment in 1897, dissolution has been mathematically described by the Noyes-Whitney equation; the integrated form of the dissolved drug concentration C as a function of time t indicates that the dissolution profile approaches the plateau value, the saturation solubility C_s, exponentially at infinite time, Eq. (4):

$$C=C_{s}\left[1-e^{-kt}\right] \qquad (4)$$

where k is the dissolution rate constant. This equation can be expressed as a function of the fraction of dose dissolved, Φ, when q ≥ 1 as follows:

$$\Phi=\frac{1}{q}\left[1-e^{-kt}\right] \qquad (5)$$

which means that only a portion of the dose is dissolved and the drug reaches the saturation level 1/q. In this case, the corresponding M.D.T.s. is equal to 1/k. On the contrary, when q < 1, which means that the entire dose is eventually dissolved, the dissolution follows the usual exponential form only until it reaches the value Φ = 1, i.e., 100% of the drug is dissolved, in a finite dissolution time, τ_d, and thereafter remains constant:

$$\Phi=\begin{cases}\dfrac{1}{q}\left[1-e^{-kt}\right], & \text{for } t<\tau_{d}\\[4pt] 1, & \text{for } t\geq\tau_{d}\end{cases} \qquad (6)$$

where:

$$\tau_{d}=-\frac{\ln(1-q)}{k} \qquad (7)$$

In this case (q < 1), the M.D.T. is as follows:

$$M.D.T.=\frac{q-(q-1)\ln(1-q)}{kq} \qquad (8)$$

which for q = 1, i.e., when the dose is equal to the drug amount required to saturate the dissolution medium, collapses to M.D.T. = 1/k.

The Weibull Function Model

The Noyes-Whitney equation is distinguished by the assertion that a constant, the dissolution rate constant k, governs the dissolution rate throughout the process. This foundational premise has faced scrutiny in the literature, leading to the emergence of models featuring time-dependent rate coefficients, which are considered to have greater physical relevance to the time-dependent phenomena that occur as dissolution progresses. In this vein, a similar analysis has been published for the Weibull function, which is used extensively for the kinetic description of drug dissolution and release data. Therefore, by replacing the dissolution rate constant k with a time-dependent coefficient, namely k = k₁t⁻ʰ, in the differential Noyes-Whitney equation expressed in terms of Φ, we end up with:

$$\frac{d\Phi}{dt}=k_{1}t^{-h}\left(\frac{1}{q}-\Phi\right) \qquad (9)$$

where k₁ is a constant with time^(h−1) units and h is a dimensionless constant. Solving Eq. (9) and replacing a = k₁/(1−h) and b = 1−h, we get a function of the fraction of dose dissolved Φ when q ≥ 1:

$$\Phi=\frac{1}{q}\left(1-e^{-at^{b}}\right) \qquad (10)$$

which also means that only a portion of the dose is dissolved, and the drug reaches the saturation level 1/q. The corresponding M.D.T.s. is equal to:

$$M.D.T.s.=a^{-\frac{1}{b}}\,\Gamma\!\left(\frac{1}{b}+1\right) \qquad (11)$$

where Γ(·) is the complete and Γ(·,·) the incomplete gamma function. When q < 1, the solution takes a branched form as follows:

$$\Phi=\begin{cases}\dfrac{1}{q}\left(1-e^{-at^{b}}\right), & \text{for } t<\tau_{d}\\[4pt] 1, & \text{for } t\geq\tau_{d}\end{cases} \qquad (12)$$

where:

$$\tau_{d}=\left(-\frac{\ln(1-q)}{a}\right)^{\frac{1}{b}} \qquad (13)$$

In this case (q < 1), where 100% of the initial dose is dissolved, the M.D.T. is given by:

$$M.D.T.=\frac{1}{bq\,a^{1/b}}\left[b(q-1)\left(-\ln(1-q)\right)^{1/b}-\Gamma\!\left(\tfrac{1}{b},-\ln(1-q)\right)+\Gamma\!\left(\tfrac{1}{b}\right)\right] \qquad (14)$$

which for q = 1 turns into:

$$M.D.T.=a^{-\frac{1}{b}}\,\Gamma\!\left(\frac{1}{b}+1\right) \qquad (15)$$

All parameters, τ_d, M.D.T. and M.D.T.s., for the two cases with q < 1 and q ≥ 1, respectively, derived for the Noyes-Whitney equation and the Weibull function are listed in Table .

The Reaction-Limited Dissolution Model

The reaction-limited model, which relies on a bidirectional chemical reaction involving the undissolved drug species, the freely available solvent molecules, and the resulting drug-solvent complex, was also used for the computational work. It is important to note that this study's foundation relies upon two earlier studies, conducted by Dokoumetzidis and Macheras in 1997 and by Lansky and Weiss in 1999.
The fundamental differential equation describing the rate of the dissolution process is as follows:

$$\frac{dC}{dt}=k_{1}^{*}\left(\frac{D}{V}-C\right)^{\lambda}-k_{-1}C \qquad (16)$$

where k₁* = k₁′(molecular weight)^(1−λ) (k₁′ = k₁[w₀]ᵇ, where [w₀] is the initial concentration of the free species), D is the initial quantity (dose) in mass units and λ is a dimensionless constant. Equations (17)–(19) provide the mathematical foundation for understanding drug dissolution under various conditions, encompassing scenarios with both homogeneous (λ = 1, Eq. (17)) and solvent-abundant (λ ≠ 1, Eqs. (18)–(19)) conditions:

$$\Phi=\frac{1}{q_{ss}}\left(1-e^{-(k_{1}^{*}+k_{-1})t}\right) \qquad (17)$$

$$\frac{dC}{dt}=k_{1}^{*}\left(\frac{D}{V}-C\right)^{\lambda} \qquad (18)$$

$$C=\frac{D}{V}-\left[\left(\frac{D}{V}\right)^{1-\lambda}-(1-\lambda)k_{1}^{*}t\right]^{1/(1-\lambda)} \qquad (19)$$

Equation (19) has the form of a power-law and can be fitted to experimental dissolution data. Unlike the Noyes-Whitney and Weibull models, a formula for the M.D.T. and M.D.T.s. cannot be derived; they can only be computed through numerical methods for both the λ = 1 and λ ≠ 1 cases. Consequently, a numerical calculation for the M.D.T. and M.D.T.s., employing Eq. (2) at its basis, was the sole method used for estimation of these parameters.

Dissolution profiles of biowaivers and Class I, II, III, and IV drugs were extracted from their respective literature monographs and articles and subsequently digitized to facilitate analysis. Our analytical focus encompassed three distinct metrics: F.D.T. (τ_d) and M.D.T. for Class I and III drugs, and M.D.T.s. for Class II and IV drugs. These metrics were computed through four distinct methods: one involving graphical analysis employing the trapezoidal rule, another employing the Noyes-Whitney equation, a third utilizing the Weibull function, and a fourth utilizing the reaction-limited model of dissolution. For the graphical analysis, the computational methodology began with a graphical approach to ascertain the F.D.T. metric. A plot depicting % dissolved against time was crafted to articulate the dissolution profile. Essential to this process was the identification of two critical time points: the last point at which % dissolved remained below 100%, followed by the subsequent point at which % dissolved reached or surpassed the 100% threshold. Linear interpolation was then applied to deduce the precise time at which complete dissolution was attained, thereby characterizing the F.D.T. Similarly, for the calculation of the M.D.T. and M.D.T.s., utilizing the dissolution profiles, we assessed the area (ABC) bounded by the dissolution curve and a line parallel to the time axis aligned with the plateau, Fig. . This area (ABC) was subsequently divided by the % dissolved magnitude corresponding to the plateau, Eq. (3), thus yielding the M.D.T. for Classes I and III, as well as the M.D.T.s. for Classes II and IV.
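The graphical procedure just described can be sketched in a few lines of plain Python; the digitized profile below is hypothetical and the function names are ours:

```python
# Hypothetical digitized profile: time (h) vs % dissolved
times = [0.0, 0.083, 0.25, 0.5, 0.75, 1.0]
pct   = [0.0, 35.0, 78.0, 96.0, 100.0, 100.0]

def fdt_by_interpolation(times, pct):
    """F.D.T.: linear interpolation between the last point below 100%
    dissolved and the first point at/above 100% dissolved."""
    for (t0, p0), (t1, p1) in zip(zip(times, pct), zip(times[1:], pct[1:])):
        if p0 < 100.0 <= p1:
            return t0 + (100.0 - p0) * (t1 - t0) / (p1 - p0)
    return None  # the profile never reaches 100%

def mdt_by_abc(times, pct):
    """M.D.T. (or M.D.T.s.): trapezoidal area (ABC) between the curve
    and its plateau, divided by the plateau level (Eq. 3)."""
    plateau = pct[-1]
    abc = sum(0.5 * ((plateau - p0) + (plateau - p1)) * (t1 - t0)
              for t0, t1, p0, p1 in zip(times, times[1:], pct, pct[1:]))
    return abc / plateau

fdt = fdt_by_interpolation(times, pct)
mdt = mdt_by_abc(times, pct)
```

For a profile that plateaus below 100%, `fdt_by_interpolation` returns `None` while `mdt_by_abc` still yields the M.D.T.s., mirroring the branching used in the analysis.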
Since the BCS is based on the minimum solubility across the physiological pH range, for each compound the lowest solubility at its corresponding pH was utilized in the calculations. For the remaining three models, we systematically performed curve-fitting procedures to analyze the experimental data. We employed the equations specified in Table for the Noyes-Whitney and Weibull models and utilized Eq. (19) for the reaction-limited model of dissolution. These curve-fitting analyses were executed within the Python programming environment, particularly employing the SciPy library. The outcome of the curve fittings provided us with parameter estimates, which were subsequently utilized to calculate the M.D.T., M.D.T.s., and F.D.T. (τ_d). In contrast, for the reaction-limited model, explicit expressions for these parameters were unavailable, and as a result, we resorted to numerical methods for the computation of M.D.T. and M.D.T.s. Specifically, for the numerical computation of the reaction-limited model time parameters, Eq. (19) was adapted by incorporating the D/V ratio (dose of drug/volume of dissolution medium, 900 mL) of each drug to generate the corresponding W(t) curve, which illustrates the cumulative amount of dissolved drug over time. This curve was subsequently integrated according to Eq. (2) to determine the M.D.T. and M.D.T.s. values, using the corresponding parameter estimates derived from the curve fittings. The drugs/drug products are listed in Table alongside their BCS classification according to various literature references. Once the F.D.T. and M.D.T. (for Class I and III drugs) and the M.D.T.s. (for Class II and IV drugs) values (h) were estimated graphically as described above in the Methods section (Eq. (3)), they were plotted against the normalized Dose/Solubility ratio, q, which was calculated for each drug product individually using Eq. (1).
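As a cross-check of the fitting step, the Weibull case for q ≥ 1 can even be handled without SciPy: taking logarithms of Eq. (10) twice gives ln(−ln(1 − qΦ)) = ln a + b ln t, so a and b follow from an ordinary least-squares line, and the M.D.T.s. then follows from Eq. (11). A minimal sketch on synthetic, noise-free data (all parameter values below are illustrative, not fitted values from the paper):

```python
import math

# Synthetic q >= 1 Weibull dissolution data (Eq. 10) with known parameters
a_true, b_true, q = 1.5, 0.8, 2.0
times = [0.1 * i for i in range(1, 41)]                    # 0.1..4.0 h
phis = [(1.0 - math.exp(-a_true * t ** b_true)) / q for t in times]

# Linearization: ln(-ln(1 - q*Phi)) = ln(a) + b*ln(t) -> least-squares line
xs = [math.log(t) for t in times]
ys = [math.log(-math.log(1.0 - q * p)) for p in phis]
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
b_fit = (n * sxy - sx * sy) / (n * sxx - sx * sx)          # slope -> b
a_fit = math.exp((sy - b_fit * sx) / n)                    # intercept -> ln(a)

# M.D.T.s. from the fitted parameters (Eq. 11)
mdts = a_fit ** (-1.0 / b_fit) * math.gamma(1.0 / b_fit + 1.0)
```

On noise-free data the linearization recovers a and b exactly; with real digitized profiles a nonlinear fit (as done here with SciPy) is preferable because the double-log transform distorts the error structure.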
The dose utilized was the highest dose (mg), and the solubility C_s (mg/mL) corresponded to the three pH values (1.2, 4.5 and 6.8) at which all the dissolution tests were carried out. For the volume of the dissolution medium, V (mL), we employed the actual volume of the medium that was used in the dissolution tests in the literature (900 mL). It is important to note that within the context of the biopharmaceutics classification system (BCS), the specified volume is set at 250 mL, aligning with the typical volume of gastrointestinal fluids. The calculation of the M.D.T. and M.D.T.s. values was feasible only for the dissolution curves that unequivocally attained a plateau. Despite our efforts to obtain M.D.T. and M.D.T.s. values for each of the pH levels across all drug products, this was not achievable in certain instances. Similarly, the estimation of the F.D.T. values for Class I and III drugs was not feasible in cases where the dissolution medium led to a plateau of less than 100% dissolved. Plotting the 1/M.D.T. or 1/F.D.T. values against the normalized q for Class I and III drugs, and the 1/M.D.T.s. values against their corresponding q values, along the lines of the T-BCS frame, we obtain Fig. . In a similar vein, we plotted the M.D.T., M.D.T.s. and τ_d values obtained through the Noyes-Whitney and Weibull fittings against the corresponding normalized Dose/Solubility ratios, q, resulting in Fig. . Regarding Fig. , theoretically, one would anticipate q values for Class I and III drugs to be less than 1 because their solubility in the pH range should enable the highest dose to fully dissolve in the given volume (900 mL). In fact, all Class I and III drugs satisfy the inequality q < 1, while most of the data points lie beyond the 2 h⁻¹ mark (> 80% of the total Class I and III data points), i.e., below the 30-min rapidly-dissolved limit for granting biowaiver status, Fig. .
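As a worked example of the normalization, consider a hypothetical drug whose dose and solubilities are invented for illustration; the BCS-relevant q uses the lowest solubility across pH 1.2, 4.5 and 6.8:

```python
def normalized_q(dose_mg, solubilities_mg_per_ml, volume_ml=900.0):
    """Dose/solubility ratio (Eq. 1) using the minimum solubility
    across the physiological pH range, as in the BCS."""
    return dose_mg / (min(solubilities_mg_per_ml) * volume_ml)

# Hypothetical drug: highest dose 200 mg; solubility at pH 1.2, 4.5, 6.8
q = normalized_q(200.0, [0.05, 0.12, 0.30])   # 200/(0.05*900)

# T-BCS style reading of q: q >= 1 means the dose cannot fully dissolve
# in the medium (saturation, summarized by M.D.T.s.); q < 1 means complete
# dissolution is possible (summarized by M.D.T. or F.D.T.).
label = ("saturation expected, use M.D.T.s." if q >= 1
         else "complete dissolution possible, use M.D.T.")
```

With the BCS reference volume of 250 mL instead of 900 mL, the same dose and solubility would give a proportionally larger q, which is why the fixed-volume caveat above matters for borderline compounds.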
In parallel, q values for Class II and IV drugs should exceed 1 due to their solubility limitations, resulting in a saturated solution at the end of the dissolution process. However, some observations deviated from this expectation, Fig. . In this vein, enclosed data points for Class II and IV drugs with q < 1 are noted. Black circles highlight drugs (ketoprofen and piroxicam) previously classified in Classes I and II of the BCS. A red circle marks a drug (amodiaquine hydrochloride) previously classified in Classes III and IV of the BCS. It should be noted that the classification in Fig. relies on a fixed volume of 900 mL and reveals that less than 50% of the Class II and IV dataset (5 out of 12, accounting for 41.67%) is positioned below the threshold of q = 1, and almost all are positioned within the range of q = 0.1 to q = 1, as evident when considering the logarithmic scale, Fig. . The exact location of the drug on the x-axis, coupled with the q value, quantifies the Class II or IV character; within each T-BCS region, individual points are defined by their specific q and M.D.T./M.D.T.s. values. This implies that the positioning of these data points reflects the heterogeneous nature of compounds within the same class. When considering Class II and IV data points, it becomes crucial to account for two factors: the q value, which directly stems from the solubility of the compound, and the M.D.T.s. values, particularly their relative placement in relation to the borderline with the M.D.T. values. The precise coordinates of these data points serve as a quantitative measure that sheds light on the compound's behavior and the interplay between the two dissolution mechanisms. As elaborated before, it is important to note that under both in vitro and in vivo conditions, a single mechanism does not exclusively operate.
The inadequacy of the reaction-limited mechanism is intricately linked to the complexity arising from the simultaneous involvement of multiple dissolution mechanisms in these scenarios. As evident from Figs. 3A and 3B, the data points conform to the same pattern observed in Fig. . Most of the Class I and III drugs are positioned beyond the 2 h⁻¹ threshold, indicating that they exhibit mean dissolution and finite dissolution times of less than 30 min, which is the time limit for rapidly dissolved drugs. Similarly, an equivalent number of data points pertaining to Class II and IV drugs are situated below the q = 1 line, as previously explained in the context of Fig. . When compared, Figs. , 3A and 3B show a slightly different data point distribution. Noteworthy data points in Figs. and , beyond the 10 h⁻¹ threshold and below the q = 1 threshold, belonging to either Class I/II or III/IV, exhibited release and dissolution profiles akin to drugs from Class I/III that reach % dissolved profiles > 85% in 30 min. In fact, similar dissolution profiles for ketoprofen, piroxicam and amodiaquine hydrochloride in dissolution medium pH 1.2 were reported. Thus, analysis of the dissolution data revealed that these drugs exhibit dissolution profiles exceeding 86% dissolved within approximately 45 min for piroxicam, less than 30 min for ketoprofen, and less than 60 min for amodiaquine hydrochloride. This underscores the substantial influence of their dual classification (Class I/II for ketoprofen and piroxicam; Class III/IV for amodiaquine hydrochloride) on their dissolution profiles, particularly evident in the case of ketoprofen. Our next goal was to explore potential relationships between the various M.D.T. and M.D.T.s. values estimated using the Noyes-Whitney equation, the Weibull function, and the reaction-limited model of dissolution and those calculated from the graphical method, which simply relies on the experimental data using Eq. (3).
To visualize and quantify the relationships among these three sets of models, we generated a correlation plot, Fig. ; inspection of the plots reveals that the Weibull function has the best performance, since in both plots the slope of the regression lines is close to unity and the intercept is close to zero. The corresponding correlation coefficients, R², are 0.83 for Class I/III drugs and 0.56 for Class II/IV drugs. These values support the correlation of the variables (parameters) analyzed if one considers the diversity of the data in terms of inter- and intra-class variation (both q > 1 and q < 1 Class II/IV drugs are included) and the longer (double) time span of the Class II/IV drugs' data in comparison with the Class I/III drugs' data. It seems that the Weibull function captures much better the dynamics of the dissolution process across all data analyzed, since the fundamental differential equation (Eq. (9)) describes a first-order process with a time-dependent coefficient driving the dissolution rate. In fact, the analytical power of the Weibull function for discerning dissolution-release processes in homogeneous/heterogeneous media has been previously depicted in . Regarding the Noyes-Whitney model, for both Class I/III and II/IV drugs the intercept is close to zero, the slopes are 0.61 and 1.10, and the correlation coefficients are 0.63 and 0.58, respectively. These results show that the Noyes-Whitney model has performance comparable with the Weibull model only for Class II/IV drugs. This can be associated with recent findings, which indicate that soluble compounds follow the diffusion-limited model, while sparingly soluble drugs follow the reaction-limited model of dissolution. In the same vein, for the reaction-limited model, the most noticeable observation is the lack of correlation for Class I/III drugs, with a correlation coefficient of only 0.04.
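The model-versus-graphical comparison described above amounts to an ordinary least-squares line through paired estimates; a compact sketch follows (the paired M.D.T. values below are invented for illustration, not the paper's data):

```python
def slope_intercept_r2(x, y):
    """Ordinary least-squares slope, intercept and R^2 for paired data."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    slope = sxy / sxx
    intercept = my - slope * mx
    r2 = sxy * sxy / (sxx * syy)
    return slope, intercept, r2

# Hypothetical paired estimates: graphical vs model-based M.D.T. (h)
graphical = [0.12, 0.25, 0.40, 0.55, 0.80, 1.10]
model     = [0.10, 0.27, 0.38, 0.57, 0.83, 1.05]
slope, intercept, r2 = slope_intercept_r2(graphical, model)
```

A slope near unity with an intercept near zero, as reported for the Weibull fits, indicates that the model-based estimates track the graphical (model-free) estimates without systematic bias.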
Although the better performance of the Weibull model is perhaps not surprising, given that it provides a more "flexible" fit than the Noyes-Whitney model, the poor performance of the reaction-limited model even for BCS II/IV drugs requires consideration in the light of the dissolution mechanisms operating under in vitro and in vivo conditions . After so many years of drug dissolution research, the prevailing dissolution mechanism relies on the diffusion layer model; however, there are many reports in the literature that justify the reaction-limited dissolution model. For example, in a 2022 study , Sleziona et al. discussed the particle dissolution behavior of a highly soluble and a sparingly soluble compound using a theoretical geometrical phase-field approach. They confirmed that the prevailing mechanism in the case of the highly soluble compound was indeed the diffusion layer model, whereas the reaction-limited (in their case, surface-reaction-limited) model prevailed for the sparingly soluble compound. This theoretical work is related to two previously published studies . In the former study, carried out under hydrodynamically controlled in vitro conditions, the two mechanisms seem to operate simultaneously. The latter study links supersaturation phenomena, which are usually encountered with Class II and IV drugs, with the reaction-limited model of drug dissolution. Overall, there has not been a single case where a compound follows only a single mechanism to the full extent under in vitro and in vivo conditions. It is obvious that easily dissolved drugs (like Class I and III drugs) have much shorter finite dissolution time values and simpler dissolution profiles than sparingly soluble drugs, which have much longer M.D.T.s and more complex (s-shaped) dissolution profiles. This means that when we attempt to correlate in vitro and in vivo results, it is much more difficult to predict Class II and Class IV behavior than that of the other two classes. 
Finally, it should be noted that all phenomena stated above are a function of the agitation rate, which is dramatically different under in vitro and in vivo conditions. Based on all the above, the poor performance of the reaction-limited model of dissolution, across various drugs from different BCS classes under in vitro and in vivo conditions, is a plausible result. The mean and median estimates, and their standard deviation and range, respectively, of the M.D.T., M.D.T.s. and τd parameters are listed in Table for Class I/III and Class II/IV drugs. A comparison of the mean with the median estimates reveals their similarity, except for the M.D.T.s. graphical and Noyes-Whitney estimates and, to a lesser degree, the reaction-limited estimates for Class II and IV drugs. For these three sets of results the median is more appropriate as a measure of the central tendency of the data. In all other cases, the mean describes the data adequately. It should also be noted that the Weibull function performed best statistically, since for all parameters studied for Class I/III and II/IV drugs the mean estimates were associated with small standard deviations and the corresponding median values were very similar. Although the sample is small (see Table ), the magnitude of the parameters roughly follows the expected ranking M.D.T. < τd < M.D.T.s. It should be noted that the inequality M.D.T. < τd is reasonable, since M.D.T. reflects the mean behaviour of solid particles in terms of the time scale of the dissolution process, while τd refers to the time for the completion of the dissolution process of the solid drug particles. In this vein, the M.D.T. estimates, being 2–3-fold shorter than τd, should not be used as metrics for the rapidly or very rapidly dissolving drug classification. Nevertheless, the M.D.T. and τd estimates are useful if contrasted with the F.A.T. 
estimates derived from the analysis of blood concentration–time data or the percent-absorbed-versus-time plots for the development of in vitro–in vivo correlations. Although the BCS is fundamentally a qualitative system used for categorizing drugs based on their solubility and permeability characteristics, the T-BCS introduces a novel dimension by complementing and expanding a previously reported quantitative biopharmaceutical classification system . This approach enables the establishment of correlations, the assessment of the magnitudes of dissolution time parameters, and the comparison of different drugs, offering valuable insights into the classification of drugs within the BCS framework.
Willingness to provide behavioral health recommendations: a cross-sectional study of entering medical students | 54d320eb-bca3-4077-8e66-576e64ab4577 | 3433382 | Preventive Medicine[mh] | Behavioral and lifestyle factors exert important influence on health, and in the US approximately half of deaths are linked to preventable behavioral and social risk factors, with smoking and poor diet and exercise as leading causes . Physicians have historically been trusted sources of information and recommendations regarding associated health risks and their mitigation. Ideally, preventive health recommendations should be clear, science-based, and artfully proffered so as to respect individual autonomy. The Guide to Community Preventive Services and the U.S. Preventive Services Task Force’s Guide to Clinical Preventive Services are useful resources for evidence-based prevention measures on various health topics. Both include an assessment of the underlying science for each prevention measure, ranging from “insufficient evidence” to “recommended.” Professional and community organizations, such as the American Cancer Society, American Academy of Pediatrics, Alcoholics Anonymous, and others also provide educational materials and preventive health recommendations, although these are not always evidence-based. In the clinical setting, physicians have the opportunity to provide prevention messages directly to patients, yet many clinical encounters do not include prevention counseling . Medical school curriculum plays an important role in training students to discuss behavioral health and lifestyle risks with their patients. Factors associated with prevention counseling and its perception as important among U.S. medical students include interest in prevention and a primary care career, female sex, non-White ethnicity, healthy personal practices, and attending a medical school that encourages healthy lifestyle for its students . 
However, most studies focus on perceived relevance and frequency of prevention counseling and do not explicitly address its substance beyond identifying the topic. Thus, there is little published information on whether counseling is limited to a neutral discussion of the risks and benefits of the courses open to the patient (i.e., continuing with, mitigating through reduction or other means, or eliminating an unhealthy practice) or also includes a recommendation for one of the possible courses of action. This study aimed to assess prevention knowledge among entering first-year medical students and characterize their approach to providing preventive behavioral health counseling, as indicated by their responses to brief clinical vignettes illustrating common behavioral health risk factors: smoking; alcohol consumption in a patient with indications of alcoholism; diet and exercise in an overweight, sedentary individual; and adolescent sexual activity. The study examined willingness to engage in four aspects of health communication: (1) providing information on risks associated with the health behavior, (2) recommending elimination of the risk factor as the most efficacious means for reducing risk, (3) including alternative strategies for reducing risk apart from eliminating the risk factor (i.e., harm reduction), and (4) assuring patients they would continue as their physician whether or not recommendations were accepted (i.e., respect for patient autonomy).
Student population The University of California, Davis School of Medicine, one of five medical schools in the ten-campus University of California system, is nationally ranked among the top 20 schools for primary care training and the top 50 schools for research . Students in the 2009 entering class had a mean undergraduate grade-point average of 3.57 and mean Medical College Admission Test total scores of 31.7. Comparable figures for all U.S. medical school matriculants in 2009 are 3.66 and 30.8, respectively . Survey First-year medical students entering the School of Medicine in Summer 2009 completed an anonymous self-administered paper survey addressing knowledge and attitudes relevant to public health. We administered the survey during the students’ initial welcome and orientation session, leading to a 100 % response rate. There were 19 knowledge questions representing the knowledge domains of the Clinical Prevention and Population Health Curriculum Framework developed by the Healthy People Curriculum Task Force convened by the Association of Teachers of Preventive Medicine and the Association of Academic Health Centers . To assess willingness to provide preventive health information and recommendations, the survey also included four clinical vignettes (Table ), each illustrating a common behavioral health risk factor: a 45-year-old smoker; ongoing alcohol consumption in a 38-year-old with history suggestive of alcoholism; a sedentary and overweight 23-year-old; and a 16-year-old contemplating becoming sexually active. Sex and ethnicity for the vignette patients were not provided. 
The survey used a five-level Likert scale to indicate willingness in each vignette to provide information or recommendations in four areas of communication: (1) provide information on associated risks, (2) recommend elimination of the behavioral risk factor as the most efficacious means for reducing risk, (3) include alternative strategies apart from risk factor elimination for lowering risk (i.e., harm reduction), and (4) assure patients of their intention to continue care whether or not recommendations are accepted. The Likert categories and their associated numeric values were “never” (0), “rarely” (1), “about half of cases” (2), “usually” (3), and “always or nearly always” (4). For each of the four areas of communication, we calculated an average willingness score based on the numeric values zero through four from the Likert scale. To maintain brevity and reduce the time demand for completing the survey, each individual questionnaire contained approximately one-third of the 19 knowledge questions and all of the four clinical vignettes. Statistical analysis Analyses were performed using the Stata IC statistical package, version 11 (College Station, TX). Population health knowledge is presented as percent of questions answered correctly. Group comparisons for knowledge scores were evaluated with the Wilcoxon rank sum test . The Spearman correlation described the association between percent of knowledge questions correctly answered and the willingness scores described above . Friedman’s rank test assessed whether willingness scores were similar across the four cases within each of the four areas of communication . Post-hoc pair-wise comparisons, adjusted for multiple comparisons, identified which case’s scores were significantly different if an overall difference across the cases was found. The University of California, Davis Institutional Review Board approved the study.
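The scoring and the Friedman comparison across vignettes described above can be sketched as follows; the Likert responses here are simulated for illustration (with deliberately lower willingness for the adolescent-sexuality vignette) and are not the study's data:

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)

# simulated Likert scores (0 = never ... 4 = always) for one communication
# area; entries in each array are respondents, one array per vignette
smoking  = rng.integers(2, 5, size=30)
alcohol  = rng.integers(2, 5, size=30)
diet     = rng.integers(2, 5, size=30)
sex_teen = rng.integers(0, 3, size=30)   # deliberately lower willingness

means = [round(float(x.mean()), 2) for x in (smoking, alcohol, diet, sex_teen)]
print("mean willingness scores:", means)

# Friedman's rank test: are willingness scores similar across the 4 cases?
stat, p = friedmanchisquare(smoking, alcohol, diet, sex_teen)
print(f"Friedman chi2 = {stat:.1f}, p = {p:.3g}")
```

A significant result would be followed, as in the study, by post-hoc pairwise comparisons adjusted for multiple testing to identify which vignette differs.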
Demographics All 93 members of the entering medical school class completed the survey; the median age of respondents was 25.0 y (interquartile range 23.7 – 26.7 y). Compared to the national population of entering U.S. medical students , School of Medicine students were more likely to be women (59.1 % vs. 48.3 %) and Asian or Pacific Islander (36.6 % vs. 22.7 %) or Hispanic (12.9 % vs. 7.9 %) and less likely to be White (46.2 % vs. 65.1 %) or African-American (4.3 % vs. 7.5 %). Population health knowledge Students answered correctly 71.4 % (median; interquartile range 66.7 % - 85.7 %) of the clinical prevention and population health knowledge questions. The lowest scores (<50 % correct) were for interpreting p values and for knowledge of viral influenza chemoprophylaxis. The highest scores (>95 % correct) were for questions related to the public health system and health communication. Men achieved higher average knowledge scores than women (median 83.3 % vs. 66.7 %, p < 0.02, Wilcoxon rank sum test). Willingness to provide behavioral health information, recommendations, and assure continued care Students showed high and statistically similar levels of willingness to provide information on associated health risks for the behavioral risk factors illustrated in the four vignettes; similar results were seen for willingness to provide alternative strategies for lowering risk apart from risk factor elimination (i.e., harm reduction) and willingness to assure patients of their intention to continue as their physician whether or not recommendations are accepted (Table ). Of the four clinical vignettes, students showed the greatest willingness to discuss health risks, provide harm reduction information, and assure that they would continue as the patient’s physician in the case of the 16-year-old contemplating initiating sexual intercourse. 
Willingness to recommend risk factor elimination was highest for poor diet and lack of exercise in the overweight, sedentary individual, followed by smoking and alcohol in a patient with signs of alcoholism. Only 28 % of students were willing “always or nearly always” to recommend sexual abstinence to the 16-year-old patient, and 15.1 % indicated they would “never” recommend sexual abstinence in this situation. In contrast, no student indicated he or she would “never” recommend risk factor elimination in the case of smoking and the sedentary, overweight individual, and 2.1 % of students indicated they would “never” recommend risk factor elimination in the patient with signs of alcoholism. Willingness to recommend risk factor elimination was similar for smoking, alcohol, and diet and exercise, and statistically significantly lower for adolescent sexual activity including intercourse (Friedman’s rank test; p < 0.001). Clinical prevention and population health knowledge score and respondent sex were not correlated with willingness to provide information on risks associated with the health behavior, to recommend risk factor elimination as the most efficacious means for reducing risk, to discuss alternative strategies for reducing risk apart from eliminating the risk factor, or to assure patients they would continue as their physician whether or not recommendations were accepted.
Harms associated with the vignette topics illustrated here are well known, and our findings suggest that this common knowledge is reflected in the willingness of entering medical students to educate regarding these behavioral risk factors. Students also showed high levels of respect for patient autonomy, as indicated by willingness to assure patients of continued care whether or not the patient accepted the proffered health recommendations. With respect to recommending risk factor elimination, students were most willing to recommend elimination of diet and exercise risk factors in an overweight and sedentary individual, followed by smoking and continued alcohol use in a patient with indications of alcoholism. The most marked (and statistically significant) finding relates to the adolescent contemplating initiation of sexual intercourse: only 28 % of students were willing “always or nearly always” to recommend avoidance of sexual intercourse, and 15.1 % indicated they would “never” recommend abstinence in this context. Neither respondent gender nor population health knowledge score affected willingness to educate, offer preventive advice, or respect patient autonomy. Health professionals have historically relied on scientific information to craft educational messages and make recommendations. There are many examples in addition to those illustrated in this study: rather than simply providing facts about seat belts, speeding, fire safety, and other health topics involving behavioral health risk factors, health professionals have taken the additional step of making clear recommendations based on the underlying science. Accordingly, health professionals should be comfortable in providing science-based information and recommendations regarding adolescent sexual activity, as with other behavioral health risk factors. The most notable adverse consequences of adolescent sexual activity include unintended pregnancy (approximately 650,000 for U.S. 
women <20 years of age in 2006 ) and sexually transmitted diseases (e.g., 420,101 new cases of Chlamydia infection among U.S. 15–19 year-olds in 2008 ). Although not necessarily causally related, adolescent sexual activity is also associated with emotional ill health ; use of tobacco, alcohol, and illicit drugs ; and low academic achievement with negative socioeconomic consequences in later life. Because of their developmental, social, and financial state of maturity, adolescents are generally less able than independent adults to deal with the adverse consequences of sexual intercourse should they occur. Contraception, condoms, and other means can mitigate the risks for unintended pregnancy and infection, but there is no disagreement that abstinence is the most efficacious preventive measure . Thus, reluctance to proceed beyond providing information to recommending against sexual intercourse in this age group appears inconsistent with practice standards for other behavioral health risk factors and with data on associated harms. A large national survey of U.S. medical students documents unwillingness to limit sexual health education to an abstinence-only message and a preference for comprehensive approaches including alternative strategies for reducing risk, such as cautious selection of partners, contraception, and condom use . Yet counseling about alternative strategies for risk reduction need not exclude a recommendation of abstinence from sexual intercourse as the most efficacious means of prevention. Medical students and physicians can educate adolescent patients who are considering becoming or already are sexually active about strengths and limitations of available means of prevention and also recommend abstinence, emphasizing that the recommendation is grounded not in moral condemnation, but in concern for protecting their health, and that the physician will continue to care for the patient whatever their decision. 
Some may argue that a recommendation for sexual abstinence is unlikely to be heeded and may alienate adolescents. Yet research suggests that adolescents appreciate honest and non-judgmental discussions with health care professionals . Low acceptance rates for recommendations to stop smoking, refrain from inordinate alcohol consumption, and obtain proper diet and exercise have not deterred health professionals from making artful and respectful science-based recommendations without alienating patients or communities. Medical school curricula for behavioral health vary widely in form and content, and the topic poses many pedagogical challenges . In the relatively noncontroversial case of smoking, guidelines are available that include a clear recommendation for smoking cessation or avoidance . Yet for fraught subjects such as sexuality, there is disagreement in society—reflected here among our entering medical students—about content of such recommendations. At the University of California, Davis, behavioral health recommendations arise naturally in the clinical setting and are also addressed in the longitudinal Doctoring course spanning the four-year curriculum . In Doctoring small-group sessions, students discuss cases and interview standardized patients, providing the opportunity to address behavioral health recommendations. Whereas students discuss recommendations for the individual cases, there is at present no overarching discussion addressing underlying principles of determining the content of recommendations. Such a discussion may not lead to full consensus on content, especially for controversial subjects such as sexuality, yet should spur thinking and a mindful, rather than automatic, approach to the patient. Important strengths for this study include its setting in a highly ranked U.S. 
medical school, high response rate (100 %), and focus on the substance of counseling offered by medical students as reflected in clinical vignettes for common clinical problems. The study has three important limitations. First, it is set in only one of the more than 150 accredited schools of medicine or osteopathic medicine in the U.S. University of California, Davis School of Medicine students were more likely to be women and Asian or Hispanic than the national population of entering medical students in 2009, yet they had similar mean grade-point average and Medical College Admission Test scores. It is possible that the different demographic characteristics of the students compared to the national population of U.S. medical students affected our results. For example, a large national study of U.S. medical students showed that women and non-Whites—groups over-represented in our students compared to nationally—were more likely to report counseling among general medicine patients . However, the magnitude of the differences in counseling frequency scores between groups was small—approximately 5 % between men and women and less than 10 % between the various ethnic groups comprising the respondents. Although we did not collect information from the respondents on ethnicity on our survey, respondent gender had no bearing on likelihood of recommending elimination of risk factors. Thus, it is likely that School of Medicine students and these results reflect a national rather than a regional perspective with respect to willingness to make behavioral health recommendations. Second, the study focused on entering medical students, and responses represent intention based on their education, values, and experience prior to beginning the medical curriculum. 
Although the students’ approach to counseling patients regarding behavioral risk factors may change as they progress through medical school and into practice, it is likely that the attitudes they bring at entrance will be influential, and medical school educators should be aware of this as they design relevant curriculum. Third, the clinical vignette format unavoidably imposes limitations that may affect response. For example, willingness to provide information and recommendations may vary according to the patient’s sex, perceived maturity, presence of other medical conditions, degree to which the patient is known to the caregiver, and circumstances of the clinic visit, none of which were indicated in these vignettes. While these factors may have affected overall willingness to provide information and recommendations, it is unlikely that they explain the marked reluctance to recommend against sexual intercourse in adolescents compared to the behavioral risk factors illustrated in the other three vignettes. This reticence may result from cultural characteristics attendant to the students’ early stage of professional development, a belief that sexual activity among adolescents carries only rare and inconsequential risks, conviction that making recommendations in this area is inappropriate or futile, or personal discomfort with the topic.
Physicians are trusted sources of health-related information and advice. It remains the patient’s decision as to whether to heed that advice, but this does not mean that the physician becomes simply a neutral source of information, unwilling to recommend, as part of the shared decision-making conversation , that which science suggests is in their best health interest. Students showed high willingness to educate and respect patient autonomy. We observed high willingness to recommend elimination of risk behaviors for smoking, alcohol, and poor diet/exercise, but not for sexual intercourse in an adolescent. Further work should include research into understanding correlates of willingness to engage in preventive health counseling by health professionals and students, including effect on message content, and on improving skills and attitudes for promoting science-based health recommendations in a respectful and effective manner. Medical curriculum should include explicit discussion of content of recommendations, especially for fraught subjects such as sexuality where consensus may not occur, to promote thinking and a mindful approach to health promotion.
The author has no competing interests.
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6920/12/28/prepub
|
Variability and forensic efficiency of 12 X-STR markers in Namibian populations | c2eacf4e-fdfd-407b-852e-4da760e1320a | 11490519 | Forensic Medicine[mh] | The analysis of X-linked STR markers has proven useful in solving kinship cases involving females and incest, as well as for identification purposes when data on reference parents is missing . Several X-linked STRs have been identified, organized into four linkage groups with different degrees of linkage between markers . Among these markers, the forensic community has selected a subset, which has been extensively characterized in terms of population variation and forensic informativity . These markers have been also tested for their molecular efficiency when assembled in multiplex reactions . One of the amplification kit used for the analysis of X-linked STRs is the Investigator®Argus X-12 (Qiagen, Hilden, Germany) that allows the simultaneous amplification of 12 STR loci. Although population studies about X chromosome polymorphisms are widespread in the literature, data on haplotype frequencies is not extensively available . Moreover, an X-STRs open-access database is not present, beside the one originally developed by Szibor et al. which contains only four populations to date (German, Ghanaian, Japanese and Chinese). Finally, as often the case for genetic studies, African populations have been only minimally investigated so far . In order to tackle these issues, we genotyped a set of X-STRs in a group of population from Namibia in southern Africa, a region of the world particularly lacking data on X chromosome STRs . In doing so, we characterized the degree of forensic informativeness of these markers, reported some cases of dropout alleles and extend the database on known off-ladder alleles. We also evaluate the relevance for these markers for investigations focusing on the biogeographic origin of samples. 
Samples collection and genotyping

Samples analysed in this work were collected in Namibia, whose population numbers about 2,700,000 inhabitants living in an area of 823,145 km². Namibia is a multi-ethnic country with 11 ethnic groups reported in the census, the majority belonging to communities speaking Bantu languages. The collection of the samples was approved by the Oxford Tropical Research Ethics Committee (OxTREC; OxTREC 49–09 and OxTREC 42–11). The analyses involved 251 DNA samples collected from healthy male subjects living in Namibia, provided by the Department of Chemistry, Life Sciences and Environmental Sustainability, University of Parma, Parma, Italy. The samples belonged to individuals who self-identified as members of the following groups (number of analysed samples): Mbukushu (or Hambukushu; n = 59) and Ovambo (n = 82), two Bantu-speaking populations, and Xun (n = 41) and Khwe (n = 69), two KhoeSan-speaking populations. The anonymity of the samples was ensured by the use of alphanumeric codes, and coded DNA samples were stored in the laboratory. The focus on male samples simplified the phasing of the X chromosome genotypes and the recovery of haplotypes. At the point of sampling, participants were asked to confirm that, to the best of their knowledge, they were not related to people already sampled in the same location/sampling session. The Oragene® kit was used to collect samples and the genetic material was extracted following the kit manufacturer's instructions. Samples were quantified with the Quantifiler™ Trio DNA Quantification kit, the plate was loaded into the 7500 Real-Time PCR thermal cycler, and the results were analysed using the HID Real-Time PCR Analysis software. The Investigator® Argus X-12 kit (Qiagen, Hilden, Germany) was used to amplify the following X-linked loci (Linkage Group): DXS10148, DXS10135, DXS8378 (LG1); DXS7132, DXS10079, DXS10074 (LG2); DXS10103, HPRTB, DXS10101 (LG3); DXS10146, DXS10134, DXS7423 (LG4).
DNA amplifications were performed following the kit manufacturer's recommended protocols. Finally, PCR products were separated and detected on an ABI Genetic Analyzer 3500xL using POP-4 polymer; alleles were called and binned with GeneMapper ID-X v1.4 software.

Data analysis

Intra- and inter-population genetic diversity of the X-STR markers was estimated considering loci individually or as haplotypes within each of the four LGs. Allele and haplotype frequencies were calculated by counting alleles and haplotypes and dividing by the total number of samples analysed. StatsX v2.0 software was used to calculate the following forensic efficiency parameters for loci considered individually and in LGs: X chromosome haplotype diversity (HD), power of discrimination (PD), polymorphism information content (PIC) and mean exclusion chance (MEC). Pairwise tests of linkage disequilibrium (LD) (significance threshold: 0.05) and genetic distances to other populations were estimated with Arlequin v3.5.2 software. The degree of LD between loci was measured within populations, to avoid the impact of the specific evolutionary history of each population on the others. Slatkin's Fst was estimated as a measure of genetic distance between populations, using haplotype frequencies. Distances were calculated between the set of Namibian populations investigated here for the first time and eight additional populations from Europe, Asia and Africa available in the literature (see Table ). Distances between populations were calculated for each of the four X-STR linkage groups and graphically represented through Neighbour-Joining (NJ) trees generated with MEGA X v11.0.13 software. All calculations were performed using the default settings of the programs.
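For intuition, the counting-based frequency estimates and two of the forensic parameters (Nei's haplotype/gene diversity and PIC) can be sketched in a few lines. This is a simplified illustration with standard textbook formulas, not the StatsX implementation, and the toy allele labels are hypothetical.

```python
from collections import Counter

def frequencies(observations):
    """Relative frequencies by direct counting (each male contributes one X allele/haplotype)."""
    n = len(observations)
    return {a: c / n for a, c in Counter(observations).items()}, n

def haplotype_diversity(observations):
    """Nei's unbiased diversity: HD = n/(n-1) * (1 - sum(p_i^2))."""
    freqs, n = frequencies(observations)
    return n / (n - 1) * (1 - sum(p * p for p in freqs.values()))

def pic(observations):
    """Polymorphism information content: 1 - sum(p_i^2) - [(sum(p_i^2))^2 - sum(p_i^4)]."""
    freqs, _ = frequencies(observations)
    s2 = sum(p ** 2 for p in freqs.values())
    s4 = sum(p ** 4 for p in freqs.values())
    return 1 - s2 - (s2 ** 2 - s4)

# Toy data: hypothetical alleles observed in 8 males at one X-STR locus.
alleles = ["14", "14", "15", "15", "16", "14", "15", "17"]
print(round(haplotype_diversity(alleles), 4), round(pic(alleles), 4))
```

The same functions apply unchanged to LG haplotypes, since in male samples each individual carries a single phased haplotype per linkage group.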
Haplotype sharing between the considered populations (Namibian and others) was explored, to evaluate the potential informativity of X haplotypes in identifying the biogeographic origin of an individual for investigative purposes. The haplotype sharing function present in Arlequin v3.5.2 software was used for this purpose.
Allele/haplotype frequencies, out-of-ladder alleles and bi-allelic patterns

Genotyping of the 251 samples resulted in 242 complete profiles (50 Mbukushu, 41 Xun, 69 Khwe and 82 Ovambo). The remaining 9 samples did not provide any amplification after multiple attempts and were not included. Several samples showed at least one out-of-ladder (OL) allele, defined as any allele not included in the reference allelic set provided by the kit manufacturer. Some of these OL alleles had been previously reported. The newly identified ones are listed in Table . OL allele assignment was performed according to molecular weight. Some of the OL alleles were present in more than one population: allele 8.1 at locus DXS7423 occurred in both the Mbukushu and the Xun, and alleles 28.3 and 29.3 at locus DXS10135 were shared between Xun and Khwe. None of the three newly identified alleles present in the Ovambo was shared with any of the other three populations. The two KhoeSan-speaking populations (Xun and Khwe) are the ones where most of these newly identified alleles were detected (5 unknown OL out of 11 total unknown OL, in both) (Table ). Nine bi-allelic genotypes were observed at seven loci, two presenting the same alleles at locus DXS10101 in the Xun and two with different alleles at the same locus in different populations (DXS10101, Xun and Ovambo) (Table ).
The full set of allele frequencies in the four populations is reported in the supplementary material (intermediate alleles with an incomplete repeat are reported without highlighting the incomplete repeat, e.g. allele 13.3 is presented as 133) (Fig. ). Descriptive parameters concerning haplotype frequencies for each of the LGs are provided in Table . LG1 therefore has the highest potential to generate both different alleles and different haplotypes. Only in the Mbukushu population does LG1 identify a smaller number of haplotypes than LG4.

Forensic efficiency parameters

Forensic efficiency parameters for the individual X-STR markers and for each linkage group were evaluated separately using the StatsX software (Fig. and Tabs. -SM). No major differences are evident across populations (Fig. ). Note that since the StatsX software deletes all incomplete profiles, parameters were computed on a total of N = 35 samples for the Mbukushu population, N = 24 for the Xun population, N = 48 for the Khwe population and N = 54 for the Owambo population.

Linkage disequilibrium (LD)

Results of the linkage disequilibrium tests are shown in the supplementary material (Tab. ). Overall, the results confirmed the subdivision of the 12 loci into four linkage groups, with some pairs in each LG showing no significant association in the different populations. However, the lack of significant association could simply be due to the small sample size analysed in each population. Noteworthy is the presence of LD between markers belonging to different LGs, which is unexpected given the physical localization of the markers on the X chromosome. These observations were more common in the Khwe and Owambo populations and more often involved markers in LG1 and LG3.
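Arlequin's LD test for male X-chromosome data is a permutation-based test of non-random association between pairs of loci. The sketch below illustrates the same idea with a simple Pearson chi-square statistic and random permutations of one locus to break any real association; it is an illustrative stand-in, not Arlequin's algorithm, and the allele labels are hypothetical.

```python
import random
from collections import Counter

def chi2_stat(pairs):
    """Pearson chi-square of independence for two-locus male haplotypes."""
    n = len(pairs)
    count_a = Counter(a for a, _ in pairs)
    count_b = Counter(b for _, b in pairs)
    count_ab = Counter(pairs)
    stat = 0.0
    for a in count_a:
        for b in count_b:
            expected = count_a[a] * count_b[b] / n
            observed = count_ab.get((a, b), 0)
            stat += (observed - expected) ** 2 / expected
    return stat

def ld_permutation_test(pairs, n_perm=10000, seed=0):
    """P-value: fraction of permuted datasets whose chi-square is at least as
    large as the observed one (shuffling locus B destroys any association)."""
    rng = random.Random(seed)
    observed = chi2_stat(pairs)
    a_alleles = [a for a, _ in pairs]
    b_alleles = [b for _, b in pairs]
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(b_alleles)
        if chi2_stat(list(zip(a_alleles, b_alleles))) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction to avoid p = 0
```

With perfectly associated loci the p-value is small, while with balanced independent loci the observed statistic is 0 and the p-value is 1; with the small per-population sample sizes discussed above, such a test has limited power, which is one reason non-significant pairs within a LG are expected.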
LG genetic distances and haplotype distribution

The four Namibian populations (Mbukushu–MBU, Xun–XUN, Khwe–KHW and Owambo–OWA) were compared to each other and to a set of worldwide reference populations (N = 8; Eritrea–ERI, Ethiopia–ETH, Somalia–SOM, Cape Verde–CAP, Guinea Bissau–GUI, Germany–GER, China–CHI, Philippines–PHI) using Slatkin's linearized Fst. Distances were calculated with Arlequin v3.5.2 software for each linkage group separately (Tab. ). The distance matrices were used to build Neighbour-Joining (NJ) trees with MEGA X v11.0.13 software (Fig. ). The pairs of populations showing the largest distance in each LG were the Mbukushu and Xun for LG1 (Fst = 0.01441), Khwe and Eritrea for LG2 (Fst = 0.02047), and Khwe and Xun for LG3 and LG4 (Fst = 0.01712 and 0.01660, respectively). Of the four LGs, LG1 shows the best fit between geography and genetics. Its NJ dendrogram separates Africans and non-Africans (in accordance with the Out of Africa model for the origin of Homo sapiens), groups Eastern and Western-Southern Africans separately, and places the two Asian groups closer to each other (Fig. ). Some of these patterns are also present in the trees based upon the other LGs, but never all together (Fig. ).

Population-specific haplotypes and haplotype sharing

Considering the results based on the trees, we explored the degree of haplotype sharing across all the populations for the 4 LGs. Haplotype distributions and patterns of shared haplotypes are listed in the supplementary material. The analysis of the Namibian populations was carried out considering only complete haplotypes for each LG, excluding haplotypes in which one or more markers had a missing value. LG1 and LG4 generally presented a greater number of population-specific haplotypes than LG2 and LG3.
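For intuition on the distance measure used in these comparisons, a Gst-style Fst can be computed directly from haplotype frequencies and then linearized following Slatkin (D = Fst / (1 − Fst)). Arlequin's Slatkin Fst is derived within an AMOVA framework, so the sketch below is only a simplified illustration; the haplotype labels are hypothetical.

```python
from collections import Counter

def hap_freqs(samples):
    """Haplotype frequencies by direct counting."""
    n = len(samples)
    return {h: c / n for h, c in Counter(samples).items()}

def linearized_fst(pop1, pop2):
    """Gst-style Fst = (Ht - Hs) / Ht from haplotype frequencies of two
    populations, then Slatkin's linearization D = Fst / (1 - Fst)."""
    f1, f2 = hap_freqs(pop1), hap_freqs(pop2)
    h1 = 1 - sum(p * p for p in f1.values())
    h2 = 1 - sum(p * p for p in f2.values())
    hs = (h1 + h2) / 2                       # mean within-population diversity
    pooled = set(f1) | set(f2)
    ht = 1 - sum(((f1.get(h, 0) + f2.get(h, 0)) / 2) ** 2 for h in pooled)
    if ht == 0:                              # both populations fixed for the same haplotype
        return 0.0
    fst = (ht - hs) / ht
    return fst / (1 - fst)
```

Two populations with identical haplotype frequencies give D = 0, and D grows as their frequency spectra diverge; a matrix of such pairwise values is what feeds the NJ tree construction.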
Both the percentage of population-specific haplotypes (PSHh, estimated out of the total number of haplotypes for each population) and the percentage of individuals presenting a specific haplotype (PSHi, estimated out of the total number of individuals for a given population) were calculated. The population with the greatest number of specific haplotypes was Germany (for all LGs), while those with the lowest numbers were the Xun population for both LG1 and LG3, the Khwe population for LG2, and the Mbukushu and Philippines populations for LG4. As expected, the number of novel haplotypes increases with the number of tested individuals, until a plateau is reached when large datasets are tested. Our results confirmed the presence of a clear correlation between the number of haplotypes and the number of different individuals (Fig. ). Interestingly, despite the relatively smaller sample size of the Namibian populations compared to the reference dataset, PSHh and PSHi values in these populations were similar to those in the reference populations (values were lower than 15% for both LG2 and LG3 and between 20–40% for LG4, see "%Hapl" in the supplementary material). LG1 values differed the most between the Namibian populations and the others: values ranged within 40–60% in the Namibian groups but were below 40% in the reference populations (except for Guinea).
X chromosome drop-outs, multi-allelic loci and out-of-ladder alleles in Namibian populations

The analysis of X-STR markers using the Investigator Argus X-12 kit (Qiagen, Hilden, Germany) in four Namibian populations resulted in several cases of allele drop-out (DO), at markers DXS10148, DXS10101, DXS10146, DXS10135, DXS7132 and DXS10079. Drop-outs can occur when nucleotide variants are present in the primer binding sites or when samples present DNA degradation. However, the low degradation index of the samples showing DO events, estimated through the ratio of the small to the large autosomal probe of the Quantifiler™ Trio DNA Quantification kit, suggests variation in the primer binding region as the most plausible explanation for the observed DO events. Bi-allelic patterns were observed across populations at different loci: DXS10134 and DXS10148 in Mbukushu samples, DXS10101 in both the Xun and Owambo populations, and DXS10103, DXS8378, DXS10079 and HPRTB in the Khwe population. Bi-allelic patterns could be the result of amplification or typing artifacts, or they could represent a mosaicism condition. Bi- and tri-allelic patterns at X-STR loci have already been described in the literature. Several out-of-ladder (OL) alleles were detected, a not uncommon phenomenon when using the Investigator Argus X-12. A subset of these were observed here for the first time (Table ).

Forensic efficiency

The most polymorphic and informative marker in all four Namibian populations was DXS10135 (PIC Mbukushu = 0.9172 with 21 different alleles, PIC Khwe = 0.9076 with 23 different alleles, PIC Xun = 0.9027 with 20 different alleles, PIC Owambo = 0.9363 with 25 different alleles), while the least informative and polymorphic markers were DXS7423 in the Mbukushu (PIC = 0.6165 with 5 alleles), Khwe (PIC = 0.6169 with 5 alleles) and Owambo (PIC = 0.5422 with 5 alleles), and DXS8378 in the Xun population (PIC = 0.4510 with 6 different alleles).
These observations are in accordance with data in the literature. There were no major differences between the parameters estimated across the four linkage groups, all very close to the maximum value of 1. Overall, the obtained results confirmed the forensic informativeness of the 12 X-STR markers in the studied populations.

Population genetics analysis

The linkage disequilibrium tests supported the assembly of the 12 X-STR markers into four linkage groups, with some observations of lack of linkage within LGs and presence of linkage across LGs (Tab. ). Population sub-structure, absence of random mating and genetic drift are all possible evolutionary scenarios explaining these discrepancies. On the other hand, these observations could be the result of stochastic effects due to the limited size of our samples. Notably, the presence of significant LD between markers DXS10135 (LG1) and DXS7423 (LG4), localized at opposite ends of the X chromosome (positions Xp22.31 and Xq28, respectively), has already been reported. It is also worth mentioning that, despite early observations, recombination events between associated markers and incomplete independence between markers belonging to different LGs have been extensively reported. Across the phylogenetic trees built using genetic distances between haplotypes for each LG, the one based on LG1 data was the closest to the real biogeographic distribution of the considered populations. African and non-African populations fell on two different branches, the two Asian populations (PHI and CHI) clustered close to each other, and the African populations were further subdivided into Southern Africa (MBU, OWA, KHW, XUN), Eastern Africa (ERI, ETH, SOM) and Western Africa (CAP and GUI). In the LG1 tree, the Eastern African populations were phylogenetically close together, as were two of the study populations (Owambo and Mbukushu).
On the other hand, in the LG2 tree we noted a population subdivision that differed from the real geographic distribution: a single cluster included Germany and the Xun, while the Khwe and Mbukushu, like Eritrea, were phylogenetically quite far from the others. This could be the effect of genetic drift involving these ethnic groups. Concerning the LG3 tree, two of the Southern African populations (OWA and XUN) formed a single cluster, highlighting their phylogenetic closeness, unlike the Khwe population, which was slightly distant from these and close to the Mbukushu. In the same tree, we noted some clusters clearly not consistent with the geographical distribution of the populations, such as the Germany/Ethiopia/Eritrea phylogenetic association. Finally, in the LG4 tree the East African populations (ERI, ETH and SOM) were phylogenetically close together, as were those belonging to West Africa (CAP and GUI). Moreover, the Owambo population (Southern Africa) appeared phylogenetically close to the West African populations. On the contrary, the Xun and Khwe populations (Southern Africa) were far from each other and from all the others, forming two separate clusters, probably due to a genetic drift effect. In all cases the Asian populations (CHI and PHI) were placed within the same cluster. Therefore, genetic non-homogeneity among populations emerged from both our results and our considerations, probably due to high intra-population inbreeding levels: hence the need for, and importance of, generating population-specific databases.

Haplotype sharing and biogeographic origin identification

The study of X-STR markers is known to be a useful tool for identifying the geographical origin of a biological sample donor. In a forensic setting this is very important, especially in cases where any additional information could be crucial in characterizing the origin of the biological material.
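The population-specific haplotype percentages used in this comparison (PSHh and PSHi, as defined in the Results) reduce to simple set operations over per-population haplotype lists. The sketch below is a minimal illustration of that bookkeeping, not the Arlequin haplotype-sharing routine, and the haplotype labels are hypothetical.

```python
def haplotype_sharing(populations):
    """For each population, find haplotypes seen in no other population and
    report PSHh (% of the population's distinct haplotypes that are specific)
    and PSHi (% of its individuals carrying a population-specific haplotype)."""
    results = {}
    for name, haps in populations.items():
        seen_elsewhere = set()
        for other, other_haps in populations.items():
            if other != name:
                seen_elsewhere.update(other_haps)
        distinct = set(haps)
        specific = distinct - seen_elsewhere
        results[name] = {
            "specific": specific,
            "PSHh": 100 * len(specific) / len(distinct),
            "PSHi": 100 * sum(1 for h in haps if h in specific) / len(haps),
        }
    return results

# Toy LG haplotypes (hypothetical): one list of male haplotypes per population.
pops = {
    "MBU": ["17-23-10", "17-23-10", "18-25-11"],
    "XUN": ["18-25-11", "19-28-12"],
}
shared = haplotype_sharing(pops)
```

In practice the input would be the complete LG haplotypes of all reference populations, and the population-specific sets are what make a haplotype potentially informative about biogeographic origin.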
The X-typing of the 251 male samples and the comparison of the genotypes with reference databases allowed us to identify a list of population-specific haplotypes for each considered population. In general, LG1 was the linkage group with the greatest number of population-specific haplotypes (PSHi range: 19–60%; PSHh range: 22–69%), followed by LG4 > LG2 > LG3. Overall, LG-based haplotypes appear to have potential for applications related to determining the geographical origin of individuals of unknown origin, but further analyses specifically testing the degree of biogeographical association of LG haplotypes are necessary before their routine application in the forensic context.
The analysis of the 12 STR loci in the four Namibian populations confirmed the forensic informativeness of these markers. The identification of several drop-outs, OL alleles and bi-allelic loci confirms the need to extend the survey of genetic variation to populations beyond Europe. Our work extends the set of population data from Africa, with particular relevance for Southern Africa, a geographic region with still very limited X-STR data. We are aware that the population sample analysed is relatively small.
However, as one of the few investigations of X-STRs in populations from Southern Africa, we believe that it represents a significant contribution to the forensic community's general goal of implementing representative reference databases of all human populations. Given that an updated X-STR database is not yet available, it is highly desirable to implement one, either by developing it from scratch or by extending STR repositories that are already available (e.g. NIST STRBase or STRidER).
Electronic supplementary material: Supplementary file 1 (XLSX 1473 KB)
Strabismus surgery in topical anaesthesia with intraoperative suture adjustment in Graves' orbitopathy

INTRODUCTION Graves' disease is an autoimmune disorder affecting the thyroid gland, which in most cases causes hyperthyroidism. The incidence in Northern Europe is reported to be 21/100,000, with a four‐fold higher incidence in females compared to males (Abraham‐Nordling et al.). Approximately one in five patients with Graves' disease develops Graves' orbitopathy (GO), with involvement of orbital tissue, including the extraocular muscles. This often leads to disfiguring proptosis and diplopia (Shan & Douglas). GO has an active first phase, characterized by inflammation of the orbital tissues, leading to swelling and exophthalmos. In the second phase, the inflammation subsides, followed by proliferation of connective tissue, leading to fibrosis of the extraocular muscles. This changes the properties of the extraocular muscles profoundly, with loss of elasticity, eventually leading to shortening/scarring. This in turn causes misalignment and severe restriction of ocular motility, which manifests in the patient as diplopia. In case of disfiguring exophthalmos, orbital decompression may be indicated. If the exophthalmos is moderate, removal of the lateral wall is most often the chosen procedure, as this surgery seldom affects the ocular alignment. However, in some patients, the orbitopathy may be sight‐threatening, either due to compression of the optic nerve, or because the severe exophthalmos leads to exposure keratopathy, corneal ulceration and keratitis. In these cases, more radical surgery is necessary in order to expand the orbital space maximally, often by removing the infero‐medial orbital wall. However, medial orbital wall decompression changes the orbital anatomy considerably, which nearly always causes a further worsening of the eye alignment (Zloto et al.).
Apart from the disfiguring and sometimes vision‐threatening exophthalmos, the misalignment and restriction of eye motility are the most challenging aspects of GO. Whereas decompression surgery causes the greatest improvement in appearance scores in quality of life (QoL) studies, strabismus surgery and the elimination of diplopia yield the highest scores on visual functioning (Woo et al.). Due to the loss of normal elasticity of the extraocular muscles in GO, the normal dose–response relationship for strabismus surgery is fundamentally changed, and the usual dosage tables for strabismus surgery are not appropriate. However, some authors have proposed specific dose–response tables for strabismus surgery in GO patients (Lyu et al., Akbari et al.). These patients are usually operated in general anaesthesia, with recessions on adjustable sutures according to a pre‐operatively planned dosage, which is then adjusted a couple of hours after surgery (Pratt‐Johnson & Tillson). Others have reported good results by reattaching the muscle at the intraoperatively relaxed position, without using adjustable sutures (Sarici et al.). In 1981, Boergen first reported the use of topical anaesthesia with intraoperative suture adjustment in this patient group. Since this first paper, he and his group have published several papers on this technique (Boergen, Kalpadakis et al., Kalpadakis et al.). We have used this technique in most GO patients at Haukeland University Hospital for more than 20 years. The aim of this study was to evaluate the results of strabismus surgery in these patients. Our main outcomes were intraoperative complications, frequency of reoperations, and the presence/absence of post‐operative diplopia in the primary position and down‐gaze. MATERIALS AND METHODS 2.1 Patients We have retrospectively examined the medical records of all patients with GO who underwent first‐time strabismus surgery at our department during the years 2014–2021.
In total, 45 patients were included. Haukeland University Hospital is the main hospital for the Western health region of Norway (approximately 1.1 million inhabitants), primarily serving the Western part of the country. However, for many years, Graves' disease has been a field of special interest and clinical research in our department. Hence, due to the complexity of the strabismus in GO, many patients are also referred to our department from other health regions of Norway. During the study period, 14 patients (31%) came from outside our health region. In 26 patients (58%), orbital decompression had been performed prior to strabismus surgery. Due to large deviations and the need for bilateral surgery, general anaesthesia was chosen for three patients. One additional patient insisted on general anaesthesia, while another three were operated under retrobulbar anaesthesia, as they were reluctant to undergo the procedure in topical anaesthesia. In one case, the surgery started in topical anaesthesia but had to be converted to retrobulbar anaesthesia due to insufficient pain relief. Thus, 37 patients (82%) of all the referred Graves' strabismus patients underwent surgery in topical anaesthesia. One of these patients had less than 2 months of post‐operative follow‐up and was therefore excluded from the study. The other 36 patients comprise the final study population (Figure ).
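The patient accounting in this paragraph can be checked with simple arithmetic, using only the counts stated in the text:

```python
# Arithmetic check of the patient flow reported in the text.
referred = 45
general = 3 + 1          # planned bilateral cases + one patient request
retrobulbar = 3 + 1      # reluctant patients + one intraoperative conversion
topical = referred - general - retrobulbar
print(topical, round(100 * topical / referred))   # 37 and 82
final_cohort = topical - 1                        # <2 months follow-up excluded
print(final_cohort)                               # 36
```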
2.2 Clinical data The following data were extracted from the medical records: gender, age of onset for Graves' disease, age of onset for eye symptoms (GO), any decompression surgery, direction of strabismus, pre‐operative angle of deviation, pre‐operative diplopia, age at first strabismus surgery, surgical procedure, which eye and which muscle operated, intraoperative complications, post‐operative angle of deviation and presence or absence of diplopia at the routine post‐operative control examination, presence or absence of diplopia at the last control examination, the need of additional surgery and follow‐up time (from surgery to last control examination). 2.3 Surgical routines and technique The following describes our routines when performing strabismus surgery in Graves' patients. At the pre‐operative examination, the patient is given thorough information about the procedure and the importance of their cooperation. In addition, the patient is also informed that post‐operative use of prisms may be necessary, as well as the possibility of further surgery. On the day of surgery, prior to the surgical procedure, the patients are offered 15 mg of oxazepam, and an intravenous catheter is placed on the forearm. The technique of strabismus surgery in topical anaesthesia with intraoperative adjustment is carried out with the patient lying down at the operating table. Both eyes are uncovered, and oxybuprocain drops are given shortly before starting the operation, and repeated if necessary, during surgery. The patients are always monitored with electrocardiography during surgery, and atropine for intravenous injection is easily available in the operating room, due to the possibility of triggering of the oculo‐cardiac reflex. The conjunctiva is opened with a standard limbal approach, and the selected extraocular muscle is exposed with a muscle hook. 
A double‐armed Vicryl 6–0 suture is placed just behind the muscle insertion, the muscle is detached and a preliminary recession with a hang‐back technique is performed. A cover test is then carried out while the patient is encouraged to fixate a light source in the primary position. If a fixational movement is still present, we readjust the suture and repeat the cover test. We also take care to perform the same examination in downgaze. When doing recessions on the inferior rectus, special attention is paid to the downgaze position in order not to overcorrect. In these cases, we accept some hypotropia in the primary position. When no or only a small fixational movement remains, the knot is permanently tied. The conjunctiva is closed with Vicryl 7‐0 Rapid and the eye is covered with chloramphenicol ointment and an eye dressing. 2.4 Post‐operative follow‐up The patients are briefly examined the day after surgery, and a 7‐day prescription of antibiotic eye drops (possibly combined with steroids) is given. The results are evaluated about 3 months post‐operatively. Due to the complexity of the strabismus in GO, as well as the need for additional lid surgery, many of the patients in the present study have been followed for a longer time after the strabismus surgery, either at the outpatient clinic in our department or at other eye departments. Thus, in addition to the post‐operative results of the 3‐month examination, we have also recorded the results of the orthoptic examination at the last visit, whether at our department or at the local eye department. Median follow‐up time in the present study was 22 months (range 2–74). 2.5 Ethics The study was approved by the Regional Committee for Medical and Health Research Ethics, Western Norway, as a quality improvement study (ref. 491 136). 2.6 Statistical analyses The data were analysed using the Statistical Package for the Social Sciences (SPSS Version 26.0; IBM Corporation, Armonk, NY, USA).
Continuous parametric data were reported as median (range).
RESULTS 3.1 Patient characteristics Pre‐operative clinical characteristics of the patients in the study group are given in Table . As shown, most of the patients had hypotropia (median pre‐operative angle of deviation 27.5 prism dioptres, range 8–45), esotropia (median pre‐operative angle of deviation 27.5 prism dioptres, range 10–50), or a combination of these misalignments. Among the six patients with combined horizontal and vertical strabismus, the vertical component was the most prominent part in five. All patients had pre‐operatively diplopia; however, one of the patients could work with single vision due to strong vertical and horizontal prism glasses. 3.2 Surgical procedure All first‐time operations were unilateral recessions, on the inferior rectus ( n = 21), the medial rectus ( n = 13) or the superior rectus ( n = 2), respectively. All surgical procedures were carried out by an experienced strabismus surgeon (OHH or AECM). There were no intraoperative complications, in particular, no triggering of the oculo‐cardiac reflex. On the first post‐operative day, four patients had corneal erosion, probably due to a loose eye dressing. They all healed on standard treatment with antibiotic ointment. In one patient, the suture knot (surgery on the inferior rectus) eroded through the conjunctiva and had to be trimmed a couple of weeks after the initial surgery. 3.3 Post‐operative alignment and diplopia Binocular single vision and a substantial improvement of the ocular alignment were present in many cases already on the first post‐operative day (Figure ). At the 3‐month post‐operative examination, 16 (44.4%) had binocular single vision in primary position and down‐gaze without prisms. In addition, nine (25.0%) had binocular single vision with a small prism correction. Eleven (30.6%) still had diplopia and needed further surgery. 
However, two of these were originally planned as a two‐step surgery due to misalignment in both the horizontal and the vertical plane, and they should therefore not be counted as reoperations. Among the other nine patients with diplopia at the post‐operative control examination, four had undercorrections and four overcorrections, while one developed a marked increase in hypotropia after surgery for esotropia (Table ). All the overcorrections were found after recessions of the inferior rectus. 3.4 Additional surgeries Both patients who were operated on as a planned two‐step surgery underwent a successful single second surgery and were diplopia‐free in the primary position and down‐gaze at the last control examination. For the other nine patients with persistent diplopia at the 3‐month post‐operative control, the details of their individual treatment and follow‐up are presented in Table . Among the 25 patients who were diplopia‐free at the 3‐month control, either without or with prisms, there were three patients (pt# 22, 32 and 36) who experienced a re‐occurrence of strabismus/diplopia a long time (10–56 months) after the initial strabismus surgery. In one of these cases (#22), this was due to a marked worsening of exophthalmos, making a bilateral medial decompression necessary (no initial decompression), which in turn caused a large esotropia and right hypotropia. This patient needed four additional strabismus surgeries, all in topical anaesthesia, which eventually made her free of diplopia. The two other cases were esotropias that gradually increased after long‐term stability, making additional surgery necessary, after which the patients again became diplopia‐free without prisms. Clinical details at the 3‐month post‐operative examination, as well as the need for reoperations, are given separately for the patients with predominantly vertical and horizontal deviations in Tables and , respectively.
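The outcome proportions reported at the 3-month examination can be verified from the counts stated in the text (n = 36):

```python
# Check of the 3-month outcome proportions reported in the Results (n = 36).
n = 36
outcomes = {"single vision, no prisms": 16,
            "single vision with prisms": 9,
            "persistent diplopia": 11}
assert sum(outcomes.values()) == n          # the three groups cover all patients
pct = {k: round(100 * v / n, 1) for k, v in outcomes.items()}
print(pct)   # 44.4, 25.0, 30.6
```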
At the last follow‐up examination, 32/36 (88.9%) were diplopia‐free, either without (19/36 or 52.8%) or with weak prisms (13/36 or 36.1%).
DISCUSSION In the present study, 82% of the patients with GO referred to our department with strabismus could be operated in topical anaesthesia with intraoperative suture adjustment. As many of the GO patients are of senior age, it is a great advantage to avoid general anaesthesia. There were no intraoperative complications, and in particular no triggering of the oculo‐cardiac reflex. As demonstrated in Figure , a gradual increase in the effect of surgery was seen from the first post‐operative day to the 3‐month control examination. This was especially evident in the vertical strabismus group, as all four overcorrections were found in this group (Table and Figure ). These patients underwent reoperations with advancement of the inferior rectus, and in two cases this had to be repeated. Overcorrection after recession of the inferior rectus in GO strabismus patients, especially when using adjustable sutures, is a well‐known problem. The frequency of such overcorrections varies from 20% to 50% (Sprunger & Helveston, Cormack et al., Volpe et al., Barker et al.). In the majority of these studies, post‐operative adjustment of the sutures is performed on the same day or the day after surgery. We find it encouraging that using intraoperative suture adjustment resulted in a slightly lower rate of inferior rectus overcorrections (4/21, 19%). We acknowledge that not all Graves' patients are suitable for strabismus surgery in topical anaesthesia. This may be due to a general fear of surgery on the eye while awake. In addition, in cases with extremely large deviations, either horizontally or vertically, bilateral surgery is often needed, and for these patients, we choose to operate in general anaesthesia. In our patient material, there were three such patients.
Previous experience with topical anaesthesia in GO strabismus patients has led us to only operate on one extraocular muscle in one sitting. In the present patient material, most of the cases had either (or predominantly) a horizontal or a vertical deviation. For instance, some of the esotropia patients had a smaller hypotropia component and vice versa. In these cases, we always operate (recessions) on the predominant deviation. In many cases, this surgical procedure also diminishes or even straightens the smaller combined deviation. However, as pointed out above, post‐operative prisms or even additional surgery may be necessary, and it is important to inform the patients about this possibility prior to surgery. In cases with equally or near equally large deviations both horizontally and vertically, one should plan for a two‐step surgery, which should be explained to the patient in the beginning. In our material, we had two patients in this category. In spite of a selection of the patient material towards more complicated cases, it is encouraging to conclude that almost 90% of our strabismus cases with GO operated with topical anaesthesia and intraoperative suture adjustment were diplopia‐free in primary position or down‐gaze at the last control examination, either with or without weak prism corrections. Boulakh et al. have recently reported on strabismus surgery in GO using topical anaesthesia. Their surgical technique is somewhat different from ours, as they perform a preliminary reattachment 4 mm behind the insertion and a bow‐tie adjustable suture through the original insertion, with adjustment shortly after the surgical procedure. They reported a success (absence of diplopia in primary position and down‐gaze without prisms) rate of 95%, which is substantially better than in our study. However, they had a shorter follow‐up time (only 6 weeks), and there was no information about previous orbital decompression surgeries. 
In our patient material, 56% had been operated with orbital decompression surgery, in most cases bilateral medial decompression, which is known to seriously aggravate the ocular misalignment (Gulati et al.). In addition, more than 30% of the patients were referred from other regions of the country due to the high complexity of the strabismus condition. Taken together, our patient material was clearly selected towards more difficult cases. In GO, there is no uniform consensus on success criteria after strabismus surgery. As diplopia is one of the most bothersome complaints in GO, several authors have defined the absence of diplopia as the main success criterion, especially in the important primary and downgaze/reading positions, or more generally an improvement of the field of binocular single vision (Gilbert et al., Nassar et al.). Others have argued for stricter criteria, including the use of a quality‐of‐life questionnaire (Jellema et al.). Thus, the success rate varies substantially between studies. We believe that thorough and honest information to the patient before surgery is of particular importance in these difficult strabismus conditions, in order to give the patient realistic expectations concerning the post‐operative results. As our study has a retrospective design, it has some weaknesses and limitations. We did not record quantitatively the pre‐ and post‐operative field of binocular single vision on a Goldmann perimeter, as suggested by Jellema et al.; nor did we perform any quality‐of‐life measurements. Concerning topical anaesthesia, we did not perform any formal scoring of the patients' experience with this procedure, as Boulakh et al. have done. Our impression, however, from observing and communicating with the patients during the surgical procedure and at the control examination on the first post‐operative day, is that the procedure was well tolerated in nearly all cases.
CONCLUSION The strabismus in GO is one of the most challenging conditions of ocular misalignment, as ocular motility is often severely restricted, especially after orbital decompression surgery. Surgery in topical anaesthesia with intraoperative adjustment seems to be a safe, suitable and well‐tolerated procedure, even in complicated cases such as those in this series. With this technique, the risks of general anaesthesia in older and often multi‐morbid patients can also be avoided. Prior to surgery, the patients should be carefully informed about the possibility of reoperations and the post‐operative need for prism glasses. None of the authors have any conflicts of interest.
Evolving oncology care for older adults: Trends in telemedicine use after one year of caring for older adults with cancer during COVID-19
Methods In April 2020, members of the Advocacy Committee of the Cancer and Aging Research Group (CARG) and the Association of Community Cancer Centers (ACCC) developed a Qualtrics survey to gather data from direct care providers focused on caring for older adults with cancer during the COVID-19 pandemic. In the summer of 2021, a similar survey was launched by the same research team. The 2020 survey contained 20-items, including three open-ended questions, and the 2021 survey contained 25 items, four of which were open-ended questions. Qualitative and quantitative data from both surveys have already been published . The current paper reports findings related to telemedicine from both surveys. Questions specific to telemedicine (video only) on both surveys covered perceived barriers to the use of telemedicine in older adults with cancer. In the most recent survey, additional items were added, focusing on benefits associated with telemedicine use, changes in volume from before to during the pandemic, and the availability of guidelines to select patients for telemedicine vs. face-to-face appointments. Information about the provider's professional history (years in providing care to patients with cancer, percentage of older patients, medical profession/specialty, cancer program classification, setting, and state, if in the US, or country of residence, if outside the US) was collected. 
Potential participants were recruited by emails sent through professional organizations' listservs and email blasts (CARG, ACCC, Association of Oncology Social Work, Social Work Hospice and Palliative Care Network, International Society of Geriatric Oncology, European Cancer Organisation, Advanced Practitioner Society for Hematology and Oncology, Academy of Oncology Nurse & Patient Navigators, Geriatric Society of America, American College of Rehab Medicine, American Physical Therapy Association, and Los Angeles Oncology Nursing Society Chapter) as well as social media messaging (e.g., Facebook, Twitter). Individuals were eligible to participate if they: (1) provided care for people with cancer, (2) participated in the study voluntarily, and (3) understood that the results might be reported in multiple publications. The online survey for 2020 was available from April 10 to May 1, 2020, and the 2021 online survey was open from June 15 to September 2, 2021. The University of Cincinnati Institutional Review Board (IRB) approved both studies, and the University of Louisville IRB also approved the 2021 survey. The data were analyzed using descriptive statistics (frequencies, percentages) and chi-squares with IBM SPSS Statistics version 28.0.
Results 3.1 Participant Characteristics Spring 2020 Of the 495 online surveys that were opened, 274 (55.4%) respondents met the eligibility criteria and completed the initial survey. Most respondents were social workers (42.7%), followed by physicians (24.6%), oncology nurses/navigators (8.8%), and advanced practice providers (APPs; 4.0%). Just over 68% of the respondents reported that over 50% of their patients were aged over 65. The distribution by years of post-training practice was evenly split between 1 and over 20 years, with groups ranging from one to four years (20.5%) to over 20 years (28.9%). The vast majority were based in the US (92%). Thirty-six percent reported working in a National Cancer Institute (NCI)-affiliated academic setting, followed by 29% who practiced in community cancer programs. 3.2 Participant Characteristics Summer 2021 Two hundred and thirty-five respondents started the survey, with 137 (58.3%) meeting the inclusion criteria and completing the survey. Most respondents were physicians (35.7%), followed by social workers (29.5%), APPs (12.5%), and oncology nurses/navigators (10.7%). The majority were affiliated with NCI-affiliated academic settings (58.2%), followed by community cancer programs (26.4%). Seventy-two percent of the respondents reported that over 50% of their patients were over 65. The length of professional practice (post-training years) working with individuals with cancer was evenly distributed between 1 and over 20 years, with groups ranging from one to four years (22.7%) to over 20 years (24.5%). Most respondents (65%) were based in the US. 3.3 Telemedicine Use Almost 29% of study participants reported using telemedicine to meet with patients before COVID. This rose to 80.6% during COVID.
Of those who reported using telemedicine during COVID, 18.4% had a lower volume than before COVID, with 32.7% reporting the volume was the same, 22.4% reporting a slightly higher volume, and 26.5% reporting a significant increase in volume. Only 33.1% reported having institutional guidelines for when to use telemedicine with a patient; 41.8% reported having no such guidelines and 24.6% reported not knowing if there were guidelines. The most commonly reported benefits of telemedicine use were less need for transportation (82.5%), patient safety (79.6%), availability of caregivers to attend appointments (68.6%), and healthcare worker safety (67.2%). The remaining benefits were ease of scheduling (46.0%), healthcare provider convenience (39.4%), and increased patient confidence in using telemedicine (29.2%). Chi-square tests were used to explore differences in identified benefits by percent of patients over 65 (50% or fewer vs. over 50%), years in practice (1–10 years vs. over 10 years), and type of program (comprehensive vs. other). There were no significant findings by percent of patients over 65 or by years of practice. Respondents at comprehensive cancer settings identified benefits more often than those at other types of programs for patient safety (χ² = 9.040, p < .01), patient transportation (χ² = 7.830, p < .01), caregiver ability to attend virtual appointments (χ² = 16.739, p < .001), and ease of scheduling (χ² = 8.669, p < .01). The 2020 survey's top reported barriers to telemedicine use were patient access to needed technology, patient technology challenges, patients' strong desire for face-to-face appointments, institutional infrastructure, healthcare worker technology challenges, and patient safety.
The top barriers reported in the 2021 survey were accessibility issues (e.g., visual and auditory acuity) for patients, patient technology challenges, patient access to the needed technology, patients' strong desire for face-to-face appointments, patient safety, healthcare worker technology challenges, healthcare worker preference/policy, healthcare worker home-work issues, and uncertainty about reimbursement. In the 2020 survey, 1.8% of respondents endorsed no barriers; this increased to 26.1% in the 2021 survey. Chi-square analysis was used to explore differences in identified barriers by percent of patients aged over 65 (50% or fewer vs. over 50%), years of practice (1–10 years vs. over 10), and type of program (NCI-affiliated vs. other). There were no associations between barriers and years in practice or type of program. There were statistically significant associations related to patient access to needed technology in both 2020 and 2021: respondents with over 50% of their patients aged over 65 reported this barrier more often than those with 50% or fewer patients over 65 in both surveys (2020, χ² = 6.264, p < .05; 2021, χ² = 7.085, p < .01). In 2021, barriers related to visual and hearing acuity (not asked about in 2020) were also identified more often among those with over 50% of patients aged over 65 (χ² = 7.085, p < .05).
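The chi-square comparisons above each reduce to a 2x2 contingency table (group membership by whether a barrier or benefit was endorsed). As a minimal, standard-library-only sketch of such a test (the counts below are invented for illustration and are not the survey data):

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]]. With 1 degree of freedom, the p-value
    reduces to the complementary error function of sqrt(chi2 / 2)."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical counts: rows = over 50% vs 50% or fewer patients aged
# over 65; columns = endorsed "patient access to technology" vs not.
chi2, p = chi_square_2x2(45, 15, 30, 47)
```

In practice such tests are run with a statistics package (the authors used SPSS); the sketch only shows the arithmetic behind the reported χ² and p values.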
Discussion Our study focuses on the evolution of telemedicine (video only) during the COVID pandemic at two points in time, and on how telemedicine experiences changed from the beginning of the pandemic to more than one and a half years into it. Most literature on telemedicine use in older adults with cancer focuses on patient, family caregiver, and HCP satisfaction since the onset of COVID. Our study is unique because it explored benefits of and barriers to telemedicine from HCPs who were using this modality with people diagnosed with cancer. Further, by examining HCP-perceived use, benefits, barriers, and changes to telehealth over time, this study helps to clarify what next steps are needed and the ways to support and improve telehealth use for both providers and their patients. Benefits of using telehealth with older adults with cancer were identified more often by HCPs who worked at comprehensive cancer centers (i.e., NCI-designated, tertiary referral, and specialist cancer centers) than by those who worked in other cancer care settings. Differences were also found between HCPs who reported more than 50% of their patients being aged over 65 and those with 50% or fewer such patients, related to patients having access to necessary technology or visual and hearing acuity. Neither of these findings has been reported previously. Arem et al. explored experiences of HCPs and adult patients using telemedicine during COVID-19 in a cross-sectional study. They similarly reported technology challenges faced by HCPs and patients. In their study, HCPs and patients reported reimbursement and access issues; these were captured in our second survey but not the first. Similar to our study, Arem et al. found that the benefits of telemedicine included both patient and HCP safety and having family caregivers present during the appointments.
A concern for both providers and patients was that the provider would "miss something," which validated patient safety as a top challenge for HCPs during COVID in our study. Limitations of our studies include the small sample size, the fact that most respondents were from the USA, possible differences in respondents between survey years, and a potential selection bias, which limit the generalizability of findings to all providers' experiences. In conclusion, the COVID-19 pandemic provided a unique opportunity for millions of patients, caregivers, and HCPs to experience telemedicine. Our study showed that HCPs acknowledge both the benefits of having telemedicine as a method of healthcare delivery and the barriers to telemedicine-based patient care. Future studies need to address these multifaceted barriers, such as lack of access to proper internet connectivity, difficulty using technology because of aging-related issues, and infrequent institutional guidelines on the proper context for delivering care via telemedicine. Furthermore, the equivalency of in-person and telemedicine visits should be tested. Only by effectively addressing barriers to telemedicine can this platform remain a vital part of the healthcare system for older adults with cancer.
Study concepts: AS, JLKS, LCC, MP, BC, NMLB, KB, EP, LMB. Study design: AS, KB, JLKS, EP, LMB. Data acquisition: EP, LMB. Quality control of data and algorithms: KB. Data analysis and interpretation: KB, MAVM, JLKS, LCC, MP, AK. Statistical analysis: KB. Manuscript preparation: MAVM, KB. Manuscript editing: All authors. Manuscript review: All authors.
NMLB and LMB reported relevant activities outside the submitted work: NMLB has served on advisory boards for Pfizer, Abbot, and Sanofi; received travel grants from Exact Sciences, Pfizer, and Lilly; and received speaker fees from Pfizer and AbbVie. LMB has served as a consultant for Pfizer, AstraZeneca, EMD Serono, and Merck.
Detection of hypophosphatasia in hospitalised adults in rheumatology and internal medicine departments: a multicentre study over 10 years | 7494f3dd-155f-4453-ae88-c70c8aa07cc9 | 11002352 | Internal Medicine[mh] | Hypophosphatasia is a rare and often undiagnosed disorder. Low alkaline phosphatase (ALP) values are overlooked by a majority of clinicians. In this multicentre study, low ALPs are poorly recognised by clinicians. 70.8% of patients treated with bisphosphonates never underwent ALP measurement before treatment initiation. Using a combination of multiple evocative symptoms to select patients for genetic testing seems interesting as a means of increasing the diagnosis rate and controlling healthcare costs. Mild to moderate adult hypophosphatasia may be more frequent than previously thought. Sensitisation of clinicians to ALP values is needed. ALP measurement should be mandatory in the secondary osteoporosis investigations before bisphosphonate treatment initiation. Hypophosphatasia (HPP) is a rare genetic skeletal disease due to an inherited metabolic disorder caused by mutations of the ALPL gene coding for tissue non-specific alkaline phosphatase (TNSALP). Prevalence of severe forms is estimated as ranging from 1/100 000 to 1/300 000, while prevalence of mild HPP was estimated at 1/6370 in Europe. Six forms of the disease have been defined: perinatal severe HPP, perinatal benign HPP, infantile HPP, childhood HPP, adult HPP and odontohypophosphatasia. In adults, clinical manifestations are dominated by fractures and joint disease. The most evocative fractures are localised at the metatarsals. These fractures are usually recurrent, with delayed consolidation potentially leading to pseudarthrosis. Other typical fractures affect the femoral diaphysis and occur mainly in the lateral cortex of the subtrochanteric region. Joint disease is represented mainly by calcium pyrophosphate deposition disease.
While elevated ALP is usually taken into account by clinicians, low ALP levels are easily overlooked. A monocentre study in a tertiary care hospital in France found that notification was given in only 3% of cases. The aetiologies of low ALP are multiple and differ according to the temporal pattern of hypophosphatasaemia. Furthermore, these causes are often unknown to clinicians. The aims of this study were to estimate the recognition of hypophosphatasaemia in rheumatology and internal medicine departments, to analyse the characteristics of the population presenting persistently low ALP measurements and to estimate the number of patients highly suspected of adult HPP. Secondary analyses were performed to compare patients with persistently low ALP measurements while using or not using bisphosphonates, and those with or without an identified cause of persistently low ALP levels. Study design This retrospective, descriptive and multicentre study included patients from the University hospitals of Poitiers, Nantes, Rennes, Brest, Angers and Tours. It consisted of detecting low ALP measurements occurring at least twice among patients hospitalised in the departments of Rheumatology and Internal Medicine between 1 July 2007 and 1 July 2017. No limit on the duration between two measurements was imposed. We followed STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) recommendations throughout this work. In France, internal medicine departments are departments encompassing a combination of geriatrics, clinical immunology, infectious diseases, oncohaematological diseases and rheumatology subspecialties dealing with systemic autoimmune and autoinflammatory disorders. Patients The listing of patients was established from records of the Biochemistry Department of several French university hospitals via a laboratory database request using the criterion of low ALP values ≤35 U/L (normal range: 40–120 IU/L).
Low cut-off values were identical for men and women in the laboratories that performed the analysis. A minimum of 2 low ALP values (≤35 IU/L) was required to minimise the likelihood of an analytical error; the 35 U/L threshold was chosen because it approximates the average of the lower bounds of adult normal ranges and is less exclusive than the 30 IU/L limit set in previous studies. Patients who had previously denied or restricted access to their records for research purposes and those aged less than 18 years were excluded from the study. Once authorisations were given, paper and electronic medical records were used to search patient history, symptoms, laboratory results (basic tests, calcium phosphate metabolism and specialised blood tests (bone ALP)), bone densitometry, X-ray, CT scan and MRI results. If performed, genetic testing was noted. Bone demineralisation was defined as a T-score <−2.5 SD. Chondrocalcinosis and hydroxyapatite deposition disease were diagnosed based on the aspect of the calcifications visualised on X-rays. Only non-traumatic fractures were considered in the analyses. Scoliosis was considered as present if described in the radiologist reports of spine imaging. Aetiologies for low ALP were defined as follows: Corticosteroids were considered as a possible cause of hypophosphatasaemia if patients received at least a very high dose of corticosteroids (>100 mg per day) using the standardised nomenclature for glucocorticoid dosages by Buttgereit et al.
Vitamin C insufficiency was considered for levels lower than 2.5 mg/L. Zinc insufficiency was considered for levels lower than 9 µmol/L. Cushing disease, coeliac disease and Wilson disease were considered only if they were not currently being treated or at equilibrium. Intensive care stay was considered if the patient was currently in intensive care or had been within the previous month. Ongoing oncohaematological disease, bisphosphonate treatment, denosumab treatment, septicaemia, inflammatory disease flare and intravenous immunoglobulins were considered as potential causes of low ALP. To determine the number of patients for whom low ALP ≤35 U/L was recognised and noted in their records, the discharge summary, the diagnosis written in the letter and/or the ICD-10 (International Classification of Diseases 10th Revision) code were used. Patients were defined as possible HPP if they exhibited at least three symptoms evocative of HPP in addition to persistent low ALP (arthralgia, fractures, stress fractures, low bone density, dental abnormalities, chondrocalcinosis, scoliosis, high vitamin B6 levels or high urinary phosphoethanolamine). Biochemical assays Both instruments measure ALP activity by a kinetic rate method in which a colourless organic phosphate ester substrate (nitrophenylphosphate) is hydrolysed by ALP to the yellow-coloured product p-nitrophenol and phosphate at a pH of 10.3, thereby explaining the term 'alkaline'. Changes in absorbance at 410 nm are directly proportional to the enzymatic activity of ALP. A requirement of two low ALP values (≤35 U/L) was set so as to minimise the likelihood of low ALP results due to analytic error. For each selected patient, all previous ALP values were visually examined against time to determine the temporal pattern of the qualifying serum ALP values and to separate two groups of patients. When the temporal pattern of ALP values indicated a precipitous fall from usually normal values, the patient was considered to have acute hypophosphatasaemia.
Diagnostic conditions and circumstances associated with acute hypophosphatasaemia were analysed. Laboratories used Glims, JMP or DXLab software. When the temporal pattern of ALP values indicated a persistently low ALP, or only 2 values were available, both of them under 35 U/L, the patient was considered to have persistent hypophosphatasaemia. More precise analysis was carried out to identify detailed patient history, symptoms, laboratory results (basic tests including calcium phosphate metabolism exploration and specialised blood tests such as bone ALP), bone densitometry, X-ray, CT scan and MRI results, and genetic tests when they had been performed. To determine the number of patients in whom persistently low ALP ≤35 U/L was recognised, the discharge summary, the written diagnosis and/or the ICD-10 code (E833) were used. Statistical methodology Qualitative data were expressed as percentages and quantitative data as means±SD. Analysis was conducted using the Student's t-test (or Wilcoxon, as appropriate) for quantitative data and χ² (or Fisher's exact test, as appropriate) for qualitative data. A p value of 0.05 was considered significant. Statistical analysis was performed using SAS software, V.9.1 (SAS Institute) and GraphPad Prism (GraphPad Software, California). Patient and public involvement Patients or members of the public were not involved in the design, conduct, reporting, or dissemination plans of the research.
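As an illustrative sketch, the inclusion and classification rules described in this Methods section can be written as simple predicates. This is our own simplification (in particular, the study classified temporal patterns by visual inspection of each ALP time course, which a single rule cannot fully capture):

```python
CUTOFF = 35  # IU/L; the study's threshold for a "low" ALP value

def meets_inclusion(alp_values):
    """Entry criterion: at least two serum ALP values <= 35 IU/L."""
    return sum(v <= CUTOFF for v in alp_values) >= 2

def temporal_pattern(alp_values):
    """Crude stand-in for the visual review: 'persistent' if every
    recorded value is low (or there are only two values, both low),
    otherwise 'transient' (a fall from usually normal values)."""
    return "persistent" if all(v <= CUTOFF for v in alp_values) else "transient"

# Symptoms the study treated as evocative of HPP
EVOCATIVE = {"arthralgia", "fracture", "stress fracture", "low bone density",
             "dental abnormality", "chondrocalcinosis", "scoliosis",
             "high vitamin B6", "high urinary phosphoethanolamine"}

def possible_hpp(alp_values, symptoms):
    """'Possible HPP': persistent low ALP plus >= 3 evocative symptoms."""
    return (temporal_pattern(alp_values) == "persistent"
            and len(EVOCATIVE & set(symptoms)) >= 3)
```

For example, a patient with an ALP history of [33, 30, 28] IU/L plus arthralgia, a stress fracture and chondrocalcinosis would be flagged as possible HPP under these rules.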
Population characteristics Between 1 July 2007 and 1 July 2017, 144 242 ALP measurements were performed; 56 321 hospitalised patients had at least 2 serum ALP measurements. Inclusion period differed according to the centres, with mean inclusion time of 8.42 years (±2.478). A total of 664 patients hospitalised in the rheumatology and internal medicine departments of the University Hospitals of Poitiers, Nantes, Rennes, Brest, Angers and Tours had at least two ALP values below or equal to 35 IU/L ( and ). There was a difference in the sex ratio with 57.8% of female patients (208/360) in internal medicine departments vs 66.8% in rheumatology departments (203/304) (p=0.017). Prevalence of all-cause hypophosphatasaemia was 1.18%. Among the patients, 182 (27.4 %) had persistently low serum ALP levels, representing a general prevalence of 0.32% for persistent hypophosphatasaemia (182/56321), while 482 patients (72,6%) had fluctuating serum ALP values, at least two of which were below or equal 35 IU/L, which representing prevalence of 0.86%. All in all, 38.1% of patients were male. In only 24 cases (3.61%) was hypophosphatasaemia reported in the patient’s records. Reasons for hospitalisations were various. In rheumatology departments, the top 10 reasons for hospitalisations were lumbosacral radiculopathy, haemopathy, osteoporosis, arthritis, fractures, rheumatoid arthritis, polyarthralgia, low back pain, spondyloarthritis and suspicion of rheumatic disease. In internal medicine departments, the top 10 reasons for hospitalisations were infectious disease, chronic myeloid leukaemia or lymphoma or multiple myeloma, polyarthralgia, severe anaemia/haemorrhage, autoimmune cytopenia, vasculitis, inflammatory myositis, intravenous immunoglobulin infusions, systemic lupus erythematosus, undernutrition or severe anorexia or hydro electrolytic disorders. 
Initial comparisons of characteristics of patients with transient versus persistent hypophosphatasaemia Clinical characteristics of the patients with transient and persistent hypophosphatasaemia were compared . Patients with persistent hypophosphatasaemia were younger (53.36 vs 62.93 years old), weighed less (64.01 vs 68.82 kg) and were more frequently treated in the rheumatology department (74.2% vs 35.1%). Their mean ALP values were significantly lower than in the transient group (28.0 vs 30.1 IU/L, respectively). In terms of recognition, persistent hypophosphatasaemia was more frequently identified than transient hypophosphatasaemia (12.6% vs 0.2%). Among the 182 patients with persistent hypophosphatasaemia, 70 (38.4%) had no joint imaging and 49 (26.9%) had no spinal imaging. Only 12 patients in the transient hypophosphatasaemia group had peripheral joint X-rays and nine had spinal X-rays . Patients with persistent hypophosphatasaemia experienced pain more frequently (90.1% vs 22.8%). Stress fractures were present only in patients with persistent hypophosphatasaemia. As regards medical history, chondrocalcinosis, hydroxyapatite deposition disease, dental abnormalities, early tooth loss, childhood rickets, familial HPP and convulsions were found only in patients with persistent hypophosphatasaemia. A total of 48 patients with persistent hypophosphatasaemia had a bone mineral density (BMD) measurement. However, values were not always available and sometimes only the conclusion appeared. Osteopenia was diagnosed in 20 patients and osteoporosis in 15, while 13 patients presented with normal BMD. Details are found in . Only four patients in the transient hypophosphatasaemia group had a BMD measurement, with three normal BMD and one osteopenia.
10.1136/rmdopen-2024-004316.supp1 Supplementary data 10.1136/rmdopen-2024-004316.supp2 Supplementary data Further comparisons of aetiologies in patients with transient versus persistent hypophosphatasaemia Potential aetiologies of hypophosphatasaemia were compared . Considering possible causes of hypophosphatasaemia, severe anaemia, intensive care unit stay, active oncohaematological disease, ongoing bisphosphonate treatment, sepsis, inflammatory disease flare and intravenous immunoglobulin treatment were more frequently found in transient hypophosphatasaemia, while corticosteroid intake was more frequent in persistent hypophosphatasaemia. In those patients, HPP was possible in 69 patients, in 37 of whom there was no identified cause. Documentation of low ALP values in patients with persistent hypophosphatasaemia with bisphosphonate treatment Since bisphosphonate treatment is contraindicated in HPP, we analysed whether patients with low ALP measurements were tested before bisphosphonate initiation. Among the 24 patients treated with bisphosphonates, 19 (79.2%) had never undergone ALP measurement before treatment, while in 5 patients (20.8%), this treatment had been initiated despite an abnormal decrease of ALP. Details for those patients are in . Comparisons of clinical and radiological features of patients with persistent hypophosphatasaemia with and without identified cause Out of the 182 ‘persistent’ patients, 84 cases had an identified cause and 98 did not . There were no differences in ALP measurements between groups. Patients with unidentified cause of hypophosphatasaemia were more likely to have mechanical pain (70.5% vs 44.7%), diffuse pain (26.9% vs 15.3%) and knee chondrocalcinosis history (66.7% vs 11.1%), while they less frequently had pain in the limbs (28.2% vs 47.1%), fracture history (16.7% vs 29.9%), mixed pattern pain (10.3% vs 28.2%), low BMD (10.7% vs 37.1%) and radiographic vertebral fractures (10.7% vs 31.2%). 
HPP among patients with persistent hypophosphatasaemia Among all patients with persistent hypophosphatasaemia, 69 presented at least three symptoms evocative of HPP in addition to persistently low ALP and were classified as possible HPP . Among them, 18 underwent genetic analysis in search of an ALPL gene mutation, and 11 patients presented with genetically proven HPP (61.1%). The diagnosis of genetic HPP was thereby confirmed in at least 1.7% of our total population (11/664). Among those 11 patients, 3 had another potential cause of low ALP (2 had taken corticosteroids, and 1 had a vitamin C deficiency). Selection of patients with persistently decreased ALP rendered genetic analysis more cost-effective, with a positive diagnosis rate ranging from at least 1.7% (11/664) to at least 6% (11/182), and even higher than 15.9% (11/69) when patients were classified as possible HPP. Pyridoxal phosphate (PLP) measurements had been performed in only 6 of the 664 patients included (mean±SD: 58.33±18.26 nmol/L (normal range: 30–100 nmol/L)). All of them had persistently low ALP: five were genetically tested, among whom three were positive.
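The reported rates follow directly from the counts given in the text; the following quick arithmetic check (values copied from the Results) makes the denominators explicit:

```python
# Prevalence and diagnosis-rate arithmetic, with counts taken from the text.
patients_measured = 56321  # hospitalised patients with >= 2 serum ALP measurements
low_alp_patients  = 664    # at least two ALP values <= 35 IU/L
persistent        = 182
transient         = 482
possible_hpp      = 69     # persistent low ALP plus >= 3 evocative signs
proven_hpp        = 11     # genetically confirmed among the 18 tested

def pct(num, den, nd=2):
    """Percentage rounded to nd decimals."""
    return round(100 * num / den, nd)

print(pct(low_alp_patients, patients_measured))  # 1.18 -> all-cause hypophosphatasaemia
print(pct(persistent, patients_measured))        # 0.32 -> persistent
print(pct(transient, patients_measured))         # 0.86 -> fluctuating
print(pct(proven_hpp, low_alp_patients, nd=1))   # 1.7  -> proven HPP in the whole cohort
print(pct(proven_hpp, persistent, nd=1))         # 6.0
print(pct(proven_hpp, possible_hpp, nd=1))       # 15.9
```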
In our study, the prevalence of all-cause hypophosphatasaemia among patients hospitalised in the internal medicine and rheumatology departments was 1.18% while that of persistent hypophosphatasaemia was 0.32%.
This proportion was higher than in the study by Maman et al, in which 0.13% of hospitalised patients (every department except the emergency department) had persistently low values with a less stringent threshold of 40 IU/L; than in the study by Hepp et al, in which a prevalence of 0.20% was found in adults admitted to an endocrinological outpatient clinic in Denmark; and than in the study by García-Fontana et al, with a prevalence of 0.12% in a Spanish university hospital. A German study retrospectively analysing 6 918 126 subjects with an ALP measurement between 2011 and 2016 in a single laboratory identified prevalences of 8.46% for ALP values below 30 IU/L and 9.47% for values between 30 and 40 IU/L, thereby underscoring the need to focus on persistently low ALP levels, since transient hypophosphatasaemia is quite common. McKiernan et al identified 1.1% of patients with at least two values under 40 IU/L among those seen in a multidisciplinary centre, and 0.06% with ALP levels persistently below 30 IU/L. This is concordant with a German study, which found that 1.31% of patients treated in rheumatology at the University Hospital of Bonn from 2017 to 2019 showed persistently low serum ALP levels (<35 IU/L). As regards the proportion of patients with persistent hypophosphatasaemia, it was 33.3% in the study by McKiernan et al and 39% in a study by Vieira et al, which is concordant with our result of 27.4%. Similarly, Feurstein et al found 5.5% of patients with at least one low ALP value under 40 IU/L, with only 13.9% of patients presenting persistently low ALP levels and musculoskeletal symptoms; they represented 0.8% of the whole population from a rheumatology outpatient clinic in Vienna specialised in rheumatology and rare bone diseases. In terms of notification, reporting of low ALP values was found in 3.61% of our population, which is close to the 3% noted by Maman et al .
Low ALP is clearly not sufficiently recognised, even if rheumatologists seem to identify this abnormality better, with a reported 6.91% vs 0.83% in internal medicine. As a result, adult HPP is highly underdiagnosed. A few years ago, some laboratories only indicated the 'high' cut-off and, in the absence of personal knowledge of the lower normal cut-off, the drop in ALP was not always noticed, and therefore easily overlooked. In our study, many patients had not been explored, and the final report never mentioned low ALP. Indeed, 'normal liver test' was often noted without details, even though the ALP levels were lower than 35 IU/L. That is why hypophosphatasaemia was not coded and did not result in further explorations. The difficulty of diagnosing HPP led several teams to propose algorithms to enhance the rate of diagnosis. The first strategy is based on the addition of PLP measurement to ALP so as to better stratify the likelihood of HPP diagnosis, with high PLP and low ALP as features of HPP. Another team added BMD measurement by dual-energy X-ray absorptiometry to generate a strategy of rationalised mutational analysis in resource-limited conditions. While these approaches are interesting, PLP is not measured in daily practice, thereby limiting its usefulness if low ALP has not been previously identified. In our study, PLP measurements had been performed in only 6 out of the 664 patients included. All of them had persistently low ALP; five were genetically tested, among whom three were positive. This lack of data is not surprising since low ALP is poorly recognised in daily practice. Therefore, physicians would not measure PLP since they did not take low ALP into account. Moreover, the algorithm mentioning PLP measurement in order to better screen patients was published after our inclusion period, which may be another, though less important, explanation. Another approach is to focus on populations with highly suggestive features of HPP.
Tsiantouli et al analysed ALP values in a population of 72 patients with atypical femur fractures (AFF) with at least one ALP value available. There was no difference in the median ALP value compared with the control group with hip fracture, and no difference in ALP titres in those treated with an antiresorptive agent. Moreover, none of the patients with AFF without antiresorptive drugs in this single-centre study presented with low ALP levels. Similarly, Marini et al performed ALPL genotyping in patients with AFF or other biochemical or clinical signs of adult HPP. This led them to identify three rare variants of ALPL (2.8%) in this population. Monozygotic ALPL common variants were found in 11.3% of the patients, with a higher proportion of 22% in patients with normal ALP values, 30.8% in patients with AFF, 16.7% in patients with normal ALP and high PLP levels and also, unfortunately, in 13.5% of non-HPP controls. Those results should draw clinicians' attention to the need to carefully consider the possibility that some variants have no detrimental effect on the ALP protein and that different degrees of disease severity, or carriage of a non-pathogenic mutation, can be encountered. Since metatarsal fractures are suggestive of HPP, Koehler et al focused on this population and found a 0.12% prevalence of pathogenic ALPL variants in a population of 1611 metatarsal fractures, a proportion that rose to 15% when associated with a low ALP measurement. In our study, the same approach using clinical, biological and imaging features identified 69 patients with at least three evocative symptoms of HPP in addition to persistently low ALP values (possible HPP), among whom 11 were found to have genetically proven HPP, representing a diagnosis rate of at least 15.9%. This value is probably underestimated since only 18 patients benefited from genetic testing, corresponding to a diagnosis rate of 61.1% of the tested patients.
The combination of at least three signs in addition to persistent hypophosphatasaemia should, therefore, be tested in a larger population to evaluate its cost-effectiveness. As expected, possible causes of hypophosphatasaemia were more frequently found in transient cases. Interestingly, corticosteroid use was more frequent in persistent hypophosphatasaemia. Since patients with persistent hypophosphatasaemia more frequently had a history of crystal arthropathy as well as pain, we may hypothesise that this difference results from corticosteroid use to treat arthritis flares or pain, as well as from long-term treatment of systemic inflammatory disease. Indeed, patients with chronic low-dose corticosteroid use were frequently treated with bisphosphonates in order to prevent corticosteroid-induced osteoporosis. Pain in itself is also an important point to consider insofar as more than 90% of patients with persistent hypophosphatasaemia in our study reported pain. Pain also represents the greatest burden in HPP patients, as shown in the global HPP registry. Moreover, there is ample evidence in the literature that TNSALP plays a role in the biosynthesis of adenosine, a key molecule with an antinociceptive effect, with TNSALP, prostatic acid phosphatase and ecto-5'-nucleotidase playing crucial roles in determining the overall sensitivity of the nociceptive circuits, as reviewed extensively by Street and Sowa. The relationship between tissue-nonspecific ALP and inflammation is an increasing source of interest. A recent review article by Graser et al affirms that TNSALP deficiency contributes to inflammatory reactions. TNSALP is implicated in the balance between the proinflammatory effects of ATP and the anti-inflammatory effects of adenosine. Moreover, TNSALP's ectophosphatase activity is involved in the modulation of TLR ligands such as LPS and the double-stranded RNA mimic poly-inosine:cytosine. TNSALP is also a modulator of T-cell activity.
In summary, TNSALP is now known to exert an anti-inflammatory effect. Furthermore, ALP levels are higher in case of systemic inflammation. In our study, the main treatments through which inflammatory disease flares were associated with low ALP were corticosteroids, intravenous immunoglobulins and bisphosphonate therapy for prevention of corticosteroid-induced osteoporosis. In internal medicine departments, patients were often hospitalised for sepsis or active oncohaematological disease, which can also explain low ALP. Among the patients with persistent hypophosphatasaemia, those under bisphosphonate treatment were analysed separately to identify differentiating features. The observed differences all seem to be related to osteoporosis, with frequent history of fractures, bone deformities, bone demineralisation and vertebral fractures. In terms of bone frailty, vertebral fractures were numerically less frequent in patients with persistent hypophosphatasaemia without an identified cause. In the study by Hepp et al, none of the HPP patients had vertebral fractures. The study by Genest et al found a significant correlation between low ALP levels and high spine BMD in a cohort of HPP patients. In the literature review by Sadhukhan et al, vertebral fractures were not observed in HPP patients and high lumbar spine BMD was more likely. This study has some limitations. First of all, its retrospective design induced differences in the number of observations of the different parameters, which may therefore weaken some of our conclusions. There was a bias regarding the variability of the number of patients included per centre, which did not allow us to be completely exhaustive. Indeed, 10 years of inclusion was not possible in all centres. The listing established through cooperation with the laboratory technicians of each centre was not always complete, and the extraction of data by the resident at each site was time-consuming.
This variability may stem from the diversity of laboratory software, which did not always provide online results for a number of years. This required a longer duration of analysis, with a risk of error and limitation of the data to a more restricted period. In addition, some centres underwent software changes during the inclusion period, making it difficult to retrieve previous data. Moreover, the time interval between the two measurements was variable, depending on each patient. The need for two measurements to limit the risk of analytical error may also be a source of bias, since body weight could differ at each time point. Since this condition is poorly recognised, genetic testing was performed in only a few cases in which physicians suspected HPP. In this study, long-term low-dose glucocorticoid treatment was not taken into account as a cause of low ALP levels, which is another limitation. However, the literature is scarce and conflicting on this point, which led us not to consider it a possible cause of low ALP levels. Indeed, in a study by LoCascio et al of 23 patients treated with 10–25 mg a day of glucocorticoid for various immune diseases, no significant decline of ALP levels was found after 1–2 months, 5–7 months or 12 months of glucocorticoid treatment. Moreover, in 13 patients treated for chronic glomerulonephritis with a mean glucocorticoid dose of 43.8 mg a day, progressively tapered, Sasaki et al demonstrated that ALP levels decreased significantly at the 1-, 3- and 6-month endpoints compared with baseline, and bone-specific ALP levels decreased significantly only at 3 and 6 months of follow-up, but each measurement remained in the normal range. Pearce et al showed that glucocorticoid doses of 10 mg and less for polymyalgia rheumatica resulted in higher bone-specific ALP levels during a 27-month follow-up.
Finally, Korczowska et al showed that ALP levels increased significantly at 12 months after glucocorticoid initiation in patients with rheumatoid arthritis, while there was no difference in patients already treated before the beginning of the follow-up. Although this study could not be exhaustive (missing data, impossibility of carrying out genetic testing or performing PLP measurements in all the patients suspected of HPP in a retrospective study), it has the advantage of taking stock of our practice with a view to improvement, and its multicentre character over 10 years reinforces our conclusions. The results suggest that moderate forms of genetic HPP in adults are certainly more frequent than previously thought and highlight the need for special attention to the value of ALP. The situation is further complicated by the hypothesis that some variants could lead to low-density osteopathy without HPP-disease criteria. Indeed, the presence of heterozygosity in some patients with suggestive symptoms suggests that other mechanisms are involved in the phenotypic expression of adult HPP. For very mild adult forms and exclusively dental forms, mutations may be heterozygous. In their proposal of a genetic-based nosology of HPP, Mornet et al described a mild HPP form with adult onset of unspecific symptoms, caused by autosomal dominant haploinsufficiency, with a prevalence of 1/508. However, the existence of such an entity is still controversial. In conclusion, hypophosphatasaemia was recognised in only 3.61% of the patients presenting this biological abnormality and hospitalised in rheumatology and internal medicine departments. At least 15.9% of patients with three or more evocative symptoms of HPP in addition to persistent hypophosphatasaemia had HPP. This multicentre retrospective study shows that adult HPP remains underdiagnosed.
The prevalence of moderate forms of adult HPP appears to be higher than previously thought, highlighting the need to pay special attention to ALP values.
Level V Metastases in Node-Positive Oral Squamous Cell Carcinoma: Beyond Level IIA and III | 25c43d98-e987-4013-a906-a82eae1f7e49 | 11887986 | Surgical Procedures, Operative[mh] | Surgical management of level V nodal basin in oral squamous cell carcinoma (OSCC) is controversial. Level V dissection has been associated with an increased post-operative shoulder morbidity due to excessive dissection around spinal accessory nerve. Although good quality evidence exists regarding preservation of shoulder function by avoiding level IIB dissection , addressing level V where the nerve courses through immediately after exiting level IIB is not well studied. The majority of regional metastases are confined to level I to III; especially level IB and IIA . Hence, there is an increasing trend of neck dissection comprising level I to IV in clinically node positive (cN+) OSCC . However, a large prospective trial from India has shown level IIA and III positivity to be predictors of level V metastases . Neck recurrences after initial treatment are usually advanced and seldom surgically salvageable . Moreover, lower neck nodal involvement affects the overall survival significantly . So, careful selection of patients is an absolute necessity before omitting level V dissection in cN+ OSCC. The present study attempted to identify predictors of level V metastases beyond level II and III nodal positivity with an aim to develop individualized recommendations regarding the extent of neck dissection in cN+ oral cancers.
Study Design and Participants A retrospective analysis of a prospectively maintained institutional database was done. The study was in accordance with the guidelines set by the Declaration of Helsinki and the International Council for Harmonisation – Good Clinical Practice. The study protocol was reviewed and approved by the Institutional Review Board (IRB) and Institutional Ethics Committee (IEC) (EC/NEW/Inst/2022/UA/0180), Approval No.: AIIMS/IEC/23/402. Since this was a retrospective study conducted ensuring absolute confidentiality of patient details, informed consent was waived by the IEC. Patients with cN+ OSCC, as determined by clinical examination, and contrast-enhanced magnetic resonance imaging for tongue and floor of mouth (FOM) primaries or contrast-enhanced computed tomography for non-tongue/FOM primaries, who underwent surgery with comprehensive neck dissection (CND) (levels I–V) from 1st April 2018 to 31st December 2022 were included. Patients who were operated on for residual or recurrent disease, had a pathological N0 status, or had no level-wise nodal description in the final histopathology were excluded from the final analysis. Tumors were staged according to the AJCC 8th edition TNM staging system. A consistent protocol-based treatment strategy was followed during the study period. Bilateral neck dissections were performed for tumors crossing the midline or with clinical evidence of bilateral neck nodes. Data Definitions and Categorization Datasets included patient characteristics (age, sex, ECOG performance status, addiction to tobacco and alcohol), clinical details (subsite, clinical T and N classification, neck dissection), and histopathological parameters (tumor volume, histological grade, depth of invasion, lymphovascular invasion, perineural invasion, worst pattern of invasion, bone invasion, lymph nodal yield (LNY), total number of positive lymph nodes, lymph node ratio [LNR], extra-nodal extension, and level-wise lymph nodal distribution).
LNR was defined as the number of positive nodes divided by the LNY. The total number of positive lymph nodes and the LNR were further categorized into three subgroups according to a previously published study . Statistical Analyses Results were analyzed using IBM SPSS for Windows, version 26.0 (Armonk, NY, USA). Descriptive statistics were expressed as mean (±SD), median (IQR), and proportions, as applicable. Clinical and pathological parameters were subjected to univariate ANOVA. Parameters significant in univariate analysis were further subjected to multivariate regression analysis. Post hoc ANOVA with Bonferroni correction was used for pairwise comparisons between subcategories of the total number of positive lymph nodes, LNR, and pN classification. A p value of less than 0.05 was considered statistically significant.
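As a minimal illustration of the LNR definition above (the three-subgroup cut-offs come from the cited study and are not reproduced in the text, so no categorization is attempted here; the function name and validation checks are our own):

```python
# Minimal sketch of the lymph node ratio (LNR) as defined in the text:
# LNR = number of positive lymph nodes / lymph nodal yield (LNY).
def lymph_node_ratio(positive_nodes: int, lny: int) -> float:
    if lny <= 0:
        raise ValueError("lymph nodal yield must be positive")
    if not 0 <= positive_nodes <= lny:
        raise ValueError("positive nodes must lie between 0 and LNY")
    return positive_nodes / lny

# e.g. 5 positive nodes out of the study's median yield of 42 nodes
print(round(lymph_node_ratio(5, 42), 3))  # 0.119
```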
Four hundred thirty-eight ( n = 438) patients with cN+ OSCC underwent surgery during the given period. Among these, 151 patients with a cN1 neck underwent selective neck dissection (SND) (levels I–III or I–IV), and 37 patients were operated on for residual or recurrent disease. Final histopathology confirmed pN0 status for 103 patients, and a level-wise nodal description was absent for 5 patients. One hundred and forty-two ( n = 142) patients were included in the final analysis. Thirteen ( n = 13) patients underwent bilateral CND, resulting in 155 analyzable neck dissection specimens . The mean age of the study population was 46.4 (±11.5) years. The majority of patients were male, had addictions to tobacco and alcohol, had primaries involving the buccal mucosa and retromolar trigone, and presented at a locally advanced stage (cT3/4, cN2/3). The median LNY was 42 (29.5) . The basic characteristics are highlighted in . In our study population, 15 out of 142 (10.6%) pN+ and 15 out of 245 (6.1%) cN+ OSCC had evidence of level V metastases. A total of 40% ( n = 6) of them had oral tongue and FOM primaries, followed by the lower alveolus ( n = 4, 26.7%). None of the cN1 or pN1 patients had a level V metastasis. No skip or isolated metastasis to level V was noticed. A total of 86.7% ( n = 13) and 66.7% ( n = 10) of level V metastases had associated level II and III metastases, respectively . Tumor volume, histological grade, total number of positive lymph nodes, LNR, LNY, the presence of extra-nodal extension (ENE), pN classification, and the presence of level II and III metastases were significant factors associated with level V metastases in univariate analysis. In multivariate analysis, the total number of positive lymph nodes, LNR, ENE, pN classification, and the presence of level II and III metastases were found to be significant predictors of level V metastases .
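The cohort flow and the two level V rates reported above can be reproduced from the stated counts; the quick arithmetic check below uses only numbers given in the text (the 245 denominator is the figure reported for analyzable cN+ necks):

```python
# Cohort-flow arithmetic, with counts taken from the Results.
screened            = 438  # cN+ OSCC operated in the study period
snd_only            = 151  # cN1 necks managed with selective neck dissection
residual_recurrent  = 37
pn0                 = 103
no_levelwise_report = 5
bilateral_cnd       = 13

analyzed = screened - snd_only - residual_recurrent - pn0 - no_levelwise_report
print(analyzed)                       # 142 patients in the final analysis
print(analyzed + bilateral_cnd)       # 155 analyzable neck dissection specimens
print(round(100 * 15 / analyzed, 1))  # 10.6 -> level V metastases among pN+ patients
print(round(100 * 15 / 245, 1))       # 6.1  -> level V metastases among cN+ patients
```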
The post hoc analysis suggested that ≥5 positive nodes, LNR >0.1, and pN3 status were independent risk factors for level V metastases .
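For illustration, these post hoc cut-offs can be expressed as a simple screening rule; the function below is a hypothetical sketch (the thresholds are taken from the results above, but the function itself is not a validated clinical tool):

```python
def high_risk_for_level_v(positive_nodes: int, lymph_node_yield: int,
                          pn_class: str) -> bool:
    """Hypothetical flag based on this study's post hoc cut-offs:
    >=5 positive nodes, LNR > 0.1, or pN3 status.

    LNR (lymph node ratio) = positive nodes / total nodes examined (LNY).
    """
    lnr = positive_nodes / lymph_node_yield
    return positive_nodes >= 5 or lnr > 0.1 or pn_class == "pN3"
```

For example, a patient with 2 positive nodes out of 25 examined and pN2 status would not be flagged, while 2 positive nodes out of only 15 examined (LNR ≈ 0.13) would be.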
Involvement of neck nodes is considered the single most important prognostic factor in patients with head and neck squamous cell carcinomas . An appropriate elective neck dissection is crucial for improved regional control and survival in these patients . Though CND has been the standard of care for cN+ OSCC, it is associated with increased shoulder morbidity due to excessive tissue handling around the spinal accessory nerve, particularly during level V dissection. A study on the distribution of regional metastases in OSCC by Shah et al. laid the foundation for researchers to explore less radical approaches to the neck . Subsequently, a series of studies were published favoring the feasibility of SND in cN+ OSCC . SND offered equivalent regional control and survival rates compared with CND in selected cN+ patients. As the treatment of cancer becomes more individualized over time, patient-centric predictive algorithms or nomograms are slowly being inducted into clinical practice in place of generalized guideline-based treatment. The incidence of level V metastases in OSCC is less than 5% and is always associated with metastases in preceding nodal stations . Hence, whether to address level V during neck dissection requires a more individualized approach rather than one based on overall cN status alone. In a large prospective study, Pantvaidya et al. identified metastases at levels IIA and III as predictors of level V metastases and recommended CND in this subgroup of node-positive oral cancer patients . Moreover, factors predicting overall lymph nodal metastases in OSCC do not predict level V metastases . In another prospective study, Maharaj et al. showed that ENE at other levels was the only predictor of level V metastases in multivariate analysis . Level V metastases were present in 6.1% of cN+ and 10.6% of pN+ necks in the present study cohort.
The higher incidence can be explained by the exclusion from the final analysis of the 151 cN1 patients who underwent SND, as none of the cN1 ( n = 25) or pN1 ( n = 36) patients in this study showed level V metastases. Had this subset of patients been included in the final analysis, the incidence in our study would have dropped to 3.8% (15 out of 396), in line with the literature . The study by Parikh et al. also reported an incidence of less than 1% for level V metastases in cN1 OSCC . Two-thirds of our patients with level V metastases had primaries in the tongue, FOM, and lower alveolus, though none of the primary sites attained statistical significance. Level V metastases were always associated with metastases in one of the preceding nodal levels in the present study as well. The regional failure rate after SND in N+ OSCC, though reported to be statistically equivalent to that after CND, varied from 11% to 20% . Nodal relapses are notorious for advanced stage at presentation and ENE , further emphasizing the importance of careful selection of patients for SND in the N+ neck . In our study, ≥5 positive nodes, LNR >0.1, ENE at any nodal level, pN3 status, and level II and III metastases were found to be independent predictors of level V metastases in multivariate analysis. In their systematic review and meta-analysis, Liang et al. did not show any significant difference in regional recurrence or in disease-specific and overall mortality between SND and CND . However, the results of that meta-analysis must be interpreted with caution for the following reasons. First, a significant weightage of the pooled estimate was derived from a single study . Second, this highest-weighted study did not include N2c and N3 in its cohort. Third, the length of follow-up, the inclusion criteria (cN classification), and the extent of SND were inconsistent across the included studies.
The strength of our study is that a single cohort provides sufficient information about the predictors of level V metastases to aid the surgeon in tailoring the extent of neck dissection in N+ oral cancer patients. The major limitation is that all the independent predictors of level V metastases are pathological, restricting their use in the preoperative clinical setting. Owing to the retrospective design, the authors could not verify the precision of the clinical data or eliminate inter-observer variability. However, all the significant predictive factors can be reliably extrapolated to clinically bulky nodal disease, multiple palpable nodes, extra-nodal extension, and the presence of suspicious nodes at level II and/or III. Though clinicopathological parameters may vary between primary subsites , the overall treatment principle remains the same. The present study can generate level-2b evidence, which is, at best, hypothesis generating. Prospective randomized trials comparing SND and CND in cN+ oral cancer stratified according to cN classification, though ideal for generating level-1 evidence, may not be practically feasible owing to ethical concerns. The only registered randomized trial in this regard (CTRI/2017/09/009920) is currently underway and does not include the N2c and N3 subsets of patients. In the absence of such trials, the results of the present study should serve as a guide to customizing the extent of neck dissection in cN+ OSCC.
The authors thank Dr. Amit Kumar, Department of Otolaryngology – Head and Neck Surgery, All India Institute of Medical Sciences, New Delhi, India, for providing language help and proofreading the article.
This study protocol was reviewed and approved by the Institutional Review Board and Institutional Ethics Committee (DHR Reg. No. EC/NEW/Inst/2022/UA/0180), Approval No. AIIMS/IEC/23/402. Since this was a retrospective institutional chart review ensuring absolute confidentiality of the patient details, informed consent was waived by the IEC.
The authors have no conflicts of interest to declare.
This study was not supported by any sponsor or funder.
K.S.M.: conceptualization, data curation, formal analysis, methodology, project administration, resources, software, supervision, visualization, and roles/writing – original draft, review, and editing. V.S.K.: conceptualization, data curation, formal analysis, visualization, and roles/writing – original draft. A.V.: data curation, resources, visualization, and roles/writing – original draft. T.A., A.P., and A.U.: data curation, methodology, visualization, and roles/writing – original draft. P.K., D.D.M., and A.S.: data curation, visualization, and roles/writing – review and editing. A.M., R.P., N.R., and N.S.D.: data curation and resources. A.B. and M.P.: methodology, resources, supervision, visualization, and roles/writing – review and editing. M.M.: project administration, resources, supervision, and roles/writing – review and editing.
An outbreak of Legionnaires’ disease linked to a municipal and industrial wastewater treatment plant, The Netherlands, September–October 2022 | e6a08a7e-d801-428d-bcf1-4a15a97a0886 | 11100293 | Microbiology[mh] | Legionnaires’ disease (LD) is a bacterial infection mostly caused by Legionella pneumophila species. The disease is characterised by pneumonia, often requires hospitalisation and in the Netherlands has a case fatality of ca 5% . Legionella pneumophila is divided into 16 serogroups, among which L. pneumophila serogroup 1 (sg1) is responsible for ca 90% of diagnosed cases in Europe . The incubation period is usually 2–10 days and rarely exceeds 14 days. Legionella bacteria are ubiquitous in the natural environment and can sometimes grow rapidly in man-made water systems. They can cause infection when inhaled after aerosolisation. Although the majority of LD cases are sporadic, outbreaks are commonly reported, most often related to wet cooling towers, building water systems and spa pools . Wastewater treatment plants (WWTPs) have increasingly been identified as a source in outbreaks of LD, but their role in sporadic LD is probably still underestimated . Wastewater treatment plants with biological treatment systems may have an ideal temperature for Legionella growth, and the availability of oxygen and organic nitrogen can further enhance the proliferation of Legionella . The aerobic treatment process generates aerosols that may contain Legionella that are spread to the environment. It is generally believed that industrial WWTPs (iWWTP) are more likely sources of infections than the traditional municipal WWTPs (mWWTP) due to their higher operating temperatures, often 30–38 °C, combined with nutrient-rich wastewater. Nonetheless, Legionella is prevalent in both industrial and municipal WWTPs as documented in several studies . 
In the Netherlands, an industrial biological WWTP was identified as a common source for two clusters of LD cases in 2016 and 2017, and another iWWTP as a source for cases from 2013 to 2018 . Since then, there have been multiple smaller clusters that were linked to WWTPs .
In the period 19–28 September 2022, five cases of LD were reported to the Municipal Health Service (MHS) region of Utrecht; all were residents of the town of Houten, which has 46,970 inhabitants. Because no cases of LD had been reported in the previous 5 years among Houten residents and none of the five cases reported a likely source of exposure to aerosols, an outbreak investigation was initiated on 30 September, with the aim to identify the source of the outbreak and implement control measures. The team included epidemiologists, medical doctors in communicable disease control, an infection control specialist, environmental and public health policy advisors and microbiologists from the MHS region of Utrecht, the National Institute for Public Health and the Environment (RIVM), environmental authorities and the national reference laboratory for Legionella (NRLL). We describe here the epidemiological and environmental investigations that followed, including patient interviews, typing of clinical isolates, environmental sampling and modelling that together helped identify the most likely source of infection.
Surveillance

Legionnaires' disease is a notifiable disease in the Netherlands. For a detailed description of the surveillance system, we refer to a previous publication . In short, all diagnosed LD cases are reported to the MHS, who report the case to the national level (RIVM) via an online notification system. Communicable disease nurses of the MHS interview all cases using a standardised questionnaire on possible sources of aerosol exposure in the 2 weeks before disease onset and add this exposure information to the notification system. The list of potential sources includes e.g. travel, stay in a hospital or healthcare facility, visits to risk locations such as wellness facilities and pools, occupational exposure and activities such as gardening. In addition, exposure to wet cooling towers and wastewater treatment plants is considered for local clusters or outbreaks. Medical microbiological laboratories send clinical isolates to the NRLL, and a selection of potential environmental sources is sampled . Sampling and typing of clinical and environmental isolates is done by the NRLL.

Case definition and finding

For this outbreak, a confirmed LD case was defined as a patient with pneumonia and microbiological confirmation according to the European probable or confirmed LD case definition and with symptom onset on or after 1 September 2022, living in the town of Houten or within 5 km of Houten, or who had visited Houten within the incubation period, without other likely sources. A patient with only a single high titre for L. pneumophila sg1–6 was defined as a suspected case, and paired samples would be required to classify the case as probable. To increase case finding, the MHS informed general practitioners in Houten about the LD increase through a digital letter on 4 October, encouraging them to perform diagnostics in patients with LD-like symptoms.
Epidemiological investigations

For this outbreak, the MHS re-interviewed the cases, collecting information on recent movements in- and outside Houten (e.g. cycling, hiking, shopping). Furthermore, 6-digit postal codes of identified cases were entered in the LD-GIS tool from the European Centre for Disease Prevention and Control (ECDC) ( https://legionnaires.ecdc.europa.eu/gistool ) to calculate a disease risk map based on the case density and population density for 2 km, 5 km and 10 km distance . This information was used to generate hypotheses on the location of the infection source.

Environmental investigations

To identify the source of this outbreak, possible locations for inspection and environmental sampling were identified in a radius of 5 km in or around Houten. To find registered wet cooling towers and WWTPs, we consulted the local environmental authority, examined the Atlas Living Environment maps, which contain registered wet cooling towers , and visually inspected satellite images from Google Maps. We also reviewed the source finding interviews to identify possible common exposures. The Legionella Source Identification Unit from the NRLL took environmental samples from each of the possible source locations. Moreover, we consulted the environmental authority on locations with recent changes in operating procedures and technical failures that could have facilitated Legionella proliferation.

Microbiological investigations

All collected environmental and clinical isolates were genotyped using sequence-based typing and compared with the European Working Group for Legionella Infections sequence-based typing database . To increase typing resolution, molecular serogroups, multilocus sequence typing (MLST) sequence types (STs) and 1,521-locus cgMLST complex types (CTs) were calculated in Ridom SeqSphere+ software v7.7.5 by automated allele submission to the Legionella pneumophila cgMLST server ( https://www.cgmlst.org/ncs/schema/schema/1025099 ) .
The allelic profiles were used to calculate distance matrices using a Hamming distance, ignoring pairwise missing loci. We extracted DNA from cultured isolates using a robotic MagCore extractor system H16 with a MagCore Viral extraction kit (RBC Bioscience). Sequencing libraries were prepared using the NextEra XT library prep kit (Illumina) and then run on a MiniSeq Illumina platform using a 150 bp paired-end sequencing Mid Output Kit v2 (Illumina). The acceptance criteria were set as percentage good targets > 90% and average coverage (assembled) > 30. Ridom SeqSphere+ was used to convert the cgMLST scheme developed by Moran-Gilad et al. . The allelic profile output was used to create minimum spanning trees (NJ tree) that were based on 1,535 core genes, comprising the seven housekeeping genes used for sequence-based typing (MLST) and the 1,521 cgMLST loci. The cgMLST results of randomly selected human (n = 8) and environmental isolates (n = 1) sampled in 2020 and 2021 were added to the minimum spanning tree for context.

Statistical analyses

Transmission of Legionella has been described over a long distance up to 12 km, and in a previous WWTP-associated outbreak in the Netherlands, transmission occurred over a distance of at least 3 km, with an increased attack rate up to 6 km distance . Furthermore, the outcome of the Legionnaires GIS toolkit showed similar results for the 10 km and 5 km distance models. Therefore, we used both 10 km and 5 km distances in our models. We assumed that exposure most probably took place at the residential address, where most time is spent, as previously described . Firstly, we used a spatial source identification model that has been described previously . In short, it divides the area into a spatial grid and keeps the grid cells within a specified radius of a case. Each centre point of a cell is a potential source, with a number of cases assigned to it. The model fits an exponential decay function to the incidence–distance data for each cell.
If this fit is significant at the 95% confidence level, the grid cell is retained as a potential source and a normalised measure of risk (nMR) is calculated. The measure of risk is the integral of a probability-of-illness function, which considers a baseline infectivity and distance (decay). This means that it takes into account the number of cases as well as the population density and the distance. This measure is normalised for comparison. The nMR has a value between 0 and 1, where a value closer to 1 indicates a more likely source. We assigned LD cases to a square location based on the postal code of their residential address. We first ran the model for a 1 × 1 km grid and a 10 km search radius, and then repeated the procedure for a 500 × 500 m grid with a 5 km search radius. Secondly, we assessed whether the upwind direction was correlated with the direction of each potential source location. We calculated the bearing (i.e. measure of direction, expressed here as degree of angle) between each potential source location and the cases' residential address. For example, a bearing of 45 degrees means the potential source is north-east of the case's residence. We compared this bearing with the wind direction during the patients' incubation period (2–10 days before disease onset) and calculated the difference in degrees between them. For example, when a potential source location was at 30 degrees from the case's residential address and the wind direction was 20 degrees, the difference would be 10 degrees. Under a random distribution, a mean difference of 90 degrees would be expected by chance. A two-tailed Student's t-test was performed to determine whether the observed difference was significantly lower than expected by chance. The analyses were repeated with weighting for wind velocity (Beaufort scale, with higher weight for increasing scale) and for day of the incubation period (weight for day 2 to day 10: 0.048, 0.077, 0.125, 0.173, 0.202, 0.125, 0.125, 0.087, 0.038) .
We also performed the same analyses with the locations most reported as visited by cases instead of their residential address. Data on daily average wind direction in degrees and velocity (m/s) were obtained from a weather station located ca 8.5 km from Houten via the Royal Netherlands Meteorological Institute (KNMI, www.knmi.nl ). Data on the distribution of the incubation period were obtained from a large LD outbreak in Melbourne, Australia . All analyses were performed in RStudio v2022.07.2, using the R package TrackReconstruction version 1.3; the weighted Student's t-test was performed using the R package weights version 1.0.4. We used the CalcBearing function in the TrackReconstruction package to obtain the bearing, in radians, between a given initial latitude and longitude (potential source location) and ending latitude and longitude (residential address of a case). Radians were converted to decimal degrees by multiplying by 180/π. Statistical significance was set to a p value of < 0.05.
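The bearing and angular-difference computation can be sketched in Python; the great-circle initial-bearing formula below mirrors what CalcBearing computes, and the wind directions are invented for illustration (not the study data):

```python
import math

from scipy import stats

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees
    (0 = north, 90 = east)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

def angular_diff(a, b):
    """Smallest absolute difference between two compass directions."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

# Hypothetical daily wind directions during one case's incubation period
# versus the bearing from a candidate source to the residence.
source_bearing = 30.0
wind_dirs = [20.0, 35.0, 25.0, 40.0, 15.0, 30.0, 28.0, 22.0, 33.0]
diffs = [angular_diff(source_bearing, w) for w in wind_dirs]

# Under the null, differences scatter around 90 degrees; a two-tailed
# one-sample t-test checks whether the observed mean differs from that.
t_stat, p_value = stats.ttest_1samp(diffs, popmean=90.0)
```

A mean difference well below 90 degrees, with a small p value, indicates the candidate source tends to lie upwind of the residences, as in the analysis described above.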
Descriptive epidemiology

In total, 15 cases were identified, of whom 14 were confirmed and one was suspected. Disease onset ranged from 13 September to 23 October 2022 . Nine cases were female and six were male, with a median age of 65 years (range: 41–79 years). Nine patients had underlying health conditions, one case was a current smoker, and for three, information on risk factors was not available. All cases except one were admitted to hospital, one of them to the intensive care unit. No deaths were reported. Two cases reported travel abroad: for one of them, travel was considered an unlikely source because the case had travelled more than 10 days before disease onset and stayed in Houten during the entire 10-day incubation period. The other case had travelled abroad during the 4 days before onset of symptoms, and infection abroad could not be excluded. The only common exposure identified from case interviews was buying groceries at the same shopping mall (nine of 15 cases); six of the nine cases frequented the same supermarket, which had a mist system. Only one case reported a well-known LD risk exposure, which was a swimming pool. The only two patients not living in Houten reported to have visited Houten during their incubation period. Both cases lived within 5 km of Houten and had visited the shopping mall that was also most often reported by other cases. Thirteen cases tested positive in the urine antigen test, of whom three were culture-positive in sputum. Two cases tested negative in the urine antigen test, but one of them tested positive in PCR on bronchoalveolar lavage and the other was single IgM antibody-positive for L. pneumophila serogroup 1–6. Possible source locations that were identified included a fountain, a demonstration by the fire brigade on 10 September 2022, the mist system in a supermarket where many cases purchased their groceries, a waste-processing company using water dispersion to minimise dust formation, an iWWTP and a mWWTP.
No wet cooling towers were identified in or around Houten. The outcome of the LD-GIS tool indicated that the source was most likely located in the south or south-western region of Houten, corresponding to the locations of the waste-processing company, the iWWTP and the mWWTP.

Environmental and microbiological investigations

Clinical isolates were available for three cases and typed as L. pneumophila serogroup 1 ST82 in two cases and L. pneumophila serogroup 1 ST42 in one case. The two case isolates with ST82 were identical to each other based on cgMLST results, and closely related to two of four non-outbreak ST82 patient isolates that were included in the cgMLST analysis for context, with one and two alleles difference, respectively . No Legionella was detected in the environmental samples taken from 4 to 7 October 2022 from the fountain, fire brigade, waste-processing company and the supermarket mist system, making them less likely sources of infection. Samples taken at the iWWTP and mWWTP on 4 and 7 October 2022, respectively, tested positive for L. pneumophila , with high concentrations between 2,000 and 20,000,000 colony-forming units (cfu)/L . Samples from both locations were typed as L. pneumophila sg1, ST2678, which did not match the typing of the clinical isolates. However, these isolates were identical to each other based on cgMLST. Both WWTPs also tested positive for L. pneumophila sg6, and in two samples from the mWWTP, L. pneumophila serogroup 1, ST42 was detected. The latter matched the typing results of one LD case, marking the mWWTP as a likely source of infection. This was corroborated by the cgMLST results, which showed that the ST42 strains from the patient and the mWWTP were identical. Furthermore, five non-outbreak ST42 patient isolates had an allelic distance of five to 73 alleles from the outbreak ST42 isolates.
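The allelic distances reported here come from the pairwise Hamming comparison described in the Methods, which ignores loci missing in either profile; a minimal sketch with short, invented profiles (real cgMLST profiles span 1,521 loci):

```python
def allelic_distance(profile_a, profile_b):
    """Hamming distance between two cgMLST allelic profiles, ignoring
    loci that are missing (None) in either profile."""
    return sum(
        1
        for a, b in zip(profile_a, profile_b)
        if a is not None and b is not None and a != b
    )

# Hypothetical 6-locus profiles; None marks a locus that failed to
# assemble in that isolate.
outbreak_st42 = [1, 4, 2, 7, None, 9]
patient_st42 = [1, 4, 2, 7, 3, 9]    # the missing locus is ignored
unrelated_st42 = [1, 5, 2, 8, 3, 9]  # differs at two callable loci
```

Under this comparison, `outbreak_st42` and `patient_st42` are identical (distance 0), analogous to the matching patient and mWWTP ST42 strains, while `unrelated_st42` sits two alleles away.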
Statistical models

We used a spatial source identification model to determine whether the LD incidence decreased with increasing distance from the centre of each possible source location . We included all 15 cases in the analyses because they lived within a 5 km radius of Houten. The fire brigade demonstration held on 10 September was excluded as a possible source location because it could not explain cases that occurred after the maximum incubation period had been exceeded. Based on these results, the most likely source was located south-west of Houten, where there were three putative source locations, namely the iWWTP, the mWWTP and the waste-processing company. We used a wind direction model to assess whether any of the possible source locations were in line with the wind direction during the incubation period of the patients ( and ). The predominant wind direction during the outbreak was south/south-west/west. The mWWTP was most in line with the upwind direction (71.0° difference, p < 0.001), followed by the mall (80.1°, p = 0.018). When weighting for wind speed and incubation period, this difference was even smaller for the mWWTP (65.9°, p < 0.001) and remained the same for the mall (79.8°, p = 0.012). The maximum distance from any of the residential addresses of patients to the mWWTP was 4.8 km, and 6.1 km to the iWWTP. However, the maximum distance from any of the residential addresses to the nearest of the two WWTPs was 3.0 km. The maximum distance to the mall was 5.6 km. We performed the same analyses using the mall as the location of exposure instead of the cases' residential address, as this was the only commonly reported visited location. However, the results were all non-significant.
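The distance-decay fitting at the core of the spatial source identification model can be sketched as follows. The incidence values below are invented, the fit is a log-linear simplification of the model's nonlinear fit, and the integral is only a crude stand-in for the published model's measure of risk, which additionally tests the significance of each fit and incorporates a baseline infectivity term:

```python
import numpy as np

# Hypothetical incidence (cases per 10,000 inhabitants) by distance band
# (km) around two candidate grid cells; invented, not the study data.
dist = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])
cells = {
    "cell_A": np.array([3.2, 2.1, 1.0, 0.45, 0.20, 0.10]),    # clear decay
    "cell_B": np.array([0.30, 0.32, 0.28, 0.31, 0.30, 0.29]),  # flat
}

grid = np.linspace(0.0, 5.0, 200)
risks = {}
for name, inc in cells.items():
    # Fit incidence = a * exp(-b * distance) by least squares on the
    # log scale (log inc = log a - b * distance).
    slope, intercept = np.polyfit(dist, np.log(inc), 1)
    a_hat, b_hat = np.exp(intercept), -slope
    # Integrate the fitted curve over the 5 km search radius as a crude
    # stand-in for the model's measure of risk.
    risks[name] = (a_hat * np.exp(-b_hat * grid)).mean() * (grid[-1] - grid[0])

# Normalise so the most likely cell scores 1.0, as on the nMR scale.
top = max(risks.values())
n_mr = {name: r / top for name, r in risks.items()}
```

A cell with a steep decay around it (like `cell_A`) scores close to 1, while a cell with flat background incidence scores much lower, which is how the model ranks candidate source locations.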
In total, 15 cases were identified, of whom 14 were confirmed and one was suspected. Disease onset ranged from 13 September to 23 October 2022. Nine cases were female and six were male, with a median age of 65 years (range: 41–79 years). Nine patients had underlying health conditions, one case was a current smoker, and for three, information on risk factors was not available. All cases except one were admitted to hospital, one of them to the intensive care unit. No deaths were reported. Two cases reported travel abroad: for one of them, travel was considered an unlikely source because the case had travelled more than 10 days before disease onset and stayed in Houten during the entire 10-day incubation period. The other case had travelled abroad during the 4 days before onset of symptoms, and infection abroad could not be excluded. The only common exposure identified from case interviews was buying groceries at the same shopping mall (nine of 15 cases); six of the nine cases frequented the same supermarket, which had a mist system. Only one case reported a well-known LD risk exposure, which was a swimming pool. The only two patients not living in Houten reported having visited Houten during their incubation period. Both cases lived within 5 km of Houten and had visited the shopping mall that was also most often reported by other cases. Thirteen cases tested positive in the urine antigen test, of whom three were culture-positive in sputum. Two cases tested negative in the urine antigen test, but one of them tested positive in PCR on bronchoalveolar lavage and the other was single IgM antibody-positive for L. pneumophila serogroups 1–6. Possible source locations that were identified included a fountain, a demonstration of the fire brigade on 10 September 2022, the mist system in a supermarket where many cases purchased their groceries, a waste-processing company using water dispersion to minimise dust formation, an iWWTP and a mWWTP.
No wet cooling towers were identified in or around Houten. The outcome of the LD-Gis tool indicated that the source was most likely to be located in the south or south-western region of Houten, corresponding to the locations of the waste-processing company, the iWWTP and the mWWTP.
Clinical isolates were available for three cases and typed as L. pneumophila serogroup 1 ST82 in two cases and L. pneumophila serogroup 1 ST42 in one case. The two case isolates with ST82 were identical to each other based on cgMLST results, and closely related to two of four non-outbreak ST82 patient isolates that were included in the cgMLST analysis for context, differing by one and two alleles, respectively. No Legionella was detected in the environmental samples taken from 4 to 7 October 2022 from the fountain, fire brigade, waste-processing company and the supermarket mist system, making them less likely sources of infection. Samples taken at the iWWTP and mWWTP on 4 and 7 October 2022, respectively, tested positive for L. pneumophila, with high concentrations between 2,000 and 20,000,000 colony-forming units (cfu)/L. Samples from both locations were typed as L. pneumophila sg1, ST2678, which did not match the typing of the clinical isolates. However, these isolates were identical to each other based on cgMLST. Both WWTPs also tested positive for L. pneumophila sg6, and in two samples from the mWWTP, L. pneumophila sg1 ST42 was detected. The latter matched the typing results of one LD case, marking the mWWTP as a likely source of infection. This was corroborated by the cgMLST results, which showed that the ST42 strains from the patient and the mWWTP were identical. Furthermore, five non-outbreak ST42 patient isolates had an allelic distance of five to 73 alleles from the outbreak ST42 isolates.
We used a spatial source identification model to determine whether the LD incidence decreased with increasing distance from the centre of each possible source location. We included all 15 cases in the analyses because they lived within a 5 km radius of Houten. The fire brigade demonstration held on 10 September was excluded as a possible source location because it could not explain cases that occurred after the maximum incubation period had been exceeded. Based on these results, the most likely source was located south-west of Houten, where there were three putative source locations, namely the iWWTP, the mWWTP and the waste-processing company. We used a wind direction model to assess whether any of the possible source locations were in line with the wind direction during the incubation period of the patients. The predominant wind direction during the outbreak was south/south-west/west. The mWWTP was most in line with the upwind direction (71.0° difference, p < 0.001), followed by the mall (80.1°, p = 0.018). When weighting for wind speed and incubation period, this difference was even smaller for the mWWTP (65.9°, p < 0.001) and remained the same for the mall (79.8°, p = 0.012). The maximum distance from any of the residential addresses of patients to the mWWTP was 4.8 km and 6.1 km to the iWWTP. However, the maximum distance from any of the residential addresses to either of the two WWTPs was 3.0 km. The maximum distance to the mall was 5.6 km. We performed the same analyses using the mall as the location of exposure instead of the cases' residential address, as this was the only commonly reported visited location. However, the results were all non-significant.
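The wind-direction analysis described above can be illustrated with a simplified sketch: for each candidate source, compute the bearing from the source to each case's residence and compare it with the daily downwind direction, weighting by wind speed (the study additionally weights by the incubation-period probability, omitted here). All coordinates and wind records below are hypothetical placeholders, not the study's data.

```python
import math

def angdiff(a, b):
    """Smallest absolute difference between two bearings, in degrees (0-180)."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def bearing(src, dst):
    """Compass bearing (degrees) from src to dst on a local flat-earth grid.
    Points are (x_east_km, y_north_km)."""
    dx, dy = dst[0] - src[0], dst[1] - src[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

def mean_upwind_difference(source, cases, winds):
    """Weighted mean angular difference between the source-to-case bearing and
    the downwind direction, over all case/wind-day combinations.
    winds: list of (direction_wind_blows_from_deg, speed_weight)."""
    total, wsum = 0.0, 0.0
    for case in cases:
        b = bearing(source, case)
        for wdir, w in winds:
            downwind = (wdir + 180.0) % 360.0  # direction a plume travels TOWARD
            total += w * angdiff(b, downwind)
            wsum += w
    return total / wsum

# Hypothetical example: source at origin, two cases north-east, wind from the south-west
cases = [(1.0, 1.0), (2.0, 1.5)]
winds = [(225.0, 1.0), (210.0, 0.5)]  # wind FROM 225 deg blows toward 45 deg
print(round(mean_upwind_difference((0.0, 0.0), cases, winds), 1))  # prints 9.1
```

A small mean difference indicates that cases lie consistently downwind of the candidate source; in the study this was evaluated against a null distribution to obtain p-values.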
Based on the elevated concentration of Legionella at the iWWTP, as well as the first results of the spatial source identification model that pointed towards the iWWTP as the most likely source of infection, control measures were taken. Moreover, the environmental agency reported that an anaerobic treatment step had been added to the wastewater treatment process of the iWWTP 1 year before the outbreak, which increased the temperature of the wastewater from ambient temperature to 30–38 °C. The aeration tank at the iWWTP was shut down on 14 October 2022 in order to prevent the aerosol production that would facilitate the spread of Legionella. From 7 November 2022 onwards, effluent was treated with ultraviolet (UV) light to kill microorganisms and reduce the discharge of Legionella from the iWWTP to the mWWTP. The aeration tank of the mWWTP could not be shut down because of the oxygen requirement of the microorganisms that break down the wastewater pollutants; because the mWWTP discharges into the Amsterdam–Rhine canal, a shutdown could potentially lead to contamination with wastewater pollutants. However, visitors were no longer allowed to enter the mWWTP and employees were required to wear a facemask. To prevent Legionella transmission, the aeration tank of the mWWTP was partially covered at the end of November 2023, but full coverage was not possible for practical and financial reasons. No further cases were observed after the aeration tank of the iWWTP had been shut down on 14 October 2022 and the maximum incubation period of 14 days had been exceeded. This aeration tank remained shut down until it could be fully covered on 7 April 2023.
We describe here a multidisciplinary outbreak investigation that led to the identification and elimination of a lesser-known source of Legionella infection. Microbiological results and statistical modelling suggested the iWWTP and mWWTP as potential sources. The outbreak came to an immediate halt when measures were taken at the iWWTP, suggesting that at least one of the plants, but probably both, were the infection source in this outbreak. This study shows the added value of whole genome sequencing to discriminate between outbreak isolates, especially for STs that are common in the environment. Indeed, it has been increasingly used in Legionella outbreak investigations in the past decade. Based on sequence typing and cgMLST, one of the three cases with available clinical isolates matched an isolate from the mWWTP that tested positive for L. pneumophila sg1 ST42, but did not match an isolate from the iWWTP. Multiple ST types are relatively common in Legionella outbreaks, and often only some of the ST types that are present in environmental sources can be confirmed. A possible explanation could be that the causative, possibly more virulent, strain is present in low concentrations, while other strains are abundantly present in the WWTP. Hence, the latter are more likely to be detected, as has been reported previously. Furthermore, multiple STs may form a mixed culture, and picking from such cultures may lead to isolation of only one of those STs. Repeated sampling and typing may therefore be required to find the outbreak strain in the source. Although the patients with ST82 were epidemiologically linked to the outbreak, they could not be microbiologically linked to a source. Interestingly, our patient isolates were also closely related by cgMLST to ST82 isolates from patients (n = 3) and the environment (n = 1) not connected to this outbreak.
Indeed, previous studies reporting on the genomic population structure of Legionella isolates have made similar observations for some ST types, indicating that isolates may be genetically closely related but not epidemiologically linked. Of interest, on 18 and 25 October 2022, one effluent and one aeration tank sample taken by third parties at the iWWTP tested positive for L. pneumophila sg1 and sg2, with concentrations of 1,090,000 cfu/L and 200,000 cfu/L, respectively, but with unknown ST types (laboratory reports from the iWWTP; personal communication: Henry van Herwijnen, October 2022). This confirmed the continued draining of Legionella-contaminated water to the mWWTP, despite the shutdown of the iWWTP aeration tank, and showed the need for UV disinfection of the effluent as a control measure. The use of statistical models based on wind direction, taking into account the probability-weighted incubation time and wind velocity, has to our knowledge not been applied before in LD outbreaks. While the outcomes of these models alone may not be sufficient for shaping control measures, the collective evidence, including spatial source identification models and environmental investigations, played a crucial role in pinpointing the most probable source. Here, the model results indicated that the mWWTP in particular was in line with the upwind direction of the patients' residential addresses, suggesting that the mWWTP potentially played a larger role than the iWWTP. This could possibly be due to its larger aeration volume, causing more aerosol formation. However, this does not exclude a role of aerosolisation of Legionella at the iWWTP in this outbreak, as the outbreak strain might not have been detected in the samples taken at the iWWTP, e.g. because it was present in low concentrations. The mall was also significantly associated with the upwind direction.
However, this was considered an unlikely source of infection because only a limited number of patients were exposed to the supermarket misting system, which was the only identified possible source of infection in the mall. Moreover, Legionella was not detected in samples taken from the misting system. The iWWTP identified in this outbreak had introduced an anaerobic treatment for biogas production 2 years before the outbreak, which was temporarily shut down and restarted in the year before the outbreak because of operational difficulties. The shutdown of the anaerobic treatment step reduced the efficiency of the treatment process, which led to higher concentrations of amino acids and nutrients in the warm wastewater, an identified risk for increased Legionella growth. Similar changes in the treatment process were also observed in two previous WWTP outbreaks in the Netherlands, where both sites had added an anaerobic treatment for biogas production about 1 year before the outbreak. The combination of an anaerobic treatment, which commonly operates at 30–38 °C, within the optimal temperature range for Legionella proliferation, followed by aerobic treatment, is a potential risk factor for rapid Legionella growth in a WWTP and dispersion to the environment. Moreover, these systems are mostly used to process wastewater that is rich in proteins and amino acids, further promoting Legionella growth. Indeed, the measured operating temperature of the iWWTP ranged from 30 °C during winter to 38 °C in summer, while the low operating temperature of the mWWTP (below 25 °C) was unlikely to promote rapid growth of Legionella. However, based on the outcome of the analyses, we assume that the mWWTP may have played an important role in the dispersion of Legionella due to its larger aeration volume, while the iWWTP was most likely the primary source of Legionella growth that contaminated the influx of the mWWTP.
This highlights the importance of taking into account the influent of potentially Legionella-contaminated water in the risk analysis of a WWTP, even for WWTPs that operate at temperatures too low for rapid Legionella growth. This is corroborated by previous studies that could not find a clear association between environmental factors, such as temperature, and the presence of Legionella in WWTPs. Taking control measures to prevent Legionella in biological WWTPs is challenging. Unlike in wet cooling towers, the use of biocides is not possible in the biological treatment process, and draining a WWTP for cleaning and disinfection poses a risk, as this may contaminate the surface water or the mWWTP to which the water is drained. In this outbreak, full coverage of the aeration tank was difficult because of the large area and aeration volume of the tank, which may cause overpressure under the cover, requiring air extraction with air disinfection. Furthermore, a complete cover may increase the temperature in an aeration tank, which may actually promote the growth of Legionella. Other measures that may be considered for some WWTPs are reducing aerosol formation by changes to the aeration system or using a floating cover, although the effectiveness of such control measures will need to be evaluated. This study has several limitations. Firstly, typing information was only available for three cases, which hampered matching of human cases with typing information from environmental samples. Most LD patients are diagnosed with urine antigen testing or PCR, and a sputum sample for culture is often not available because patients do not have a productive cough. Secondly, both models used the residential addresses of the patients as model input, while cases could have been exposed to Legionella at another location in Houten. However, a previous study showed that spatial exposure mostly occurs at the residential address.
Thirdly, we used data on average daily wind direction, while the wind direction may have varied over the course of the day. Lastly, information on smoking status was not available for six of the 15 patients, probably because it was not recorded when other underlying health conditions were present; this could also explain the low number of current smokers among the patients. This outbreak had an unusual male:female ratio, with nine females and six males, while usually around 70% of LD patients are male. However, no clear explanation for this could be identified.
This Legionella outbreak underlines the potential of municipal and industrial WWTPs to cause community cases and outbreaks of LD, especially those with favourable conditions for Legionella growth and dissemination, or even non-favourable conditions for growth but with an influx of contaminated water. This is particularly important because not all public health professionals may be aware of the LD risk of WWTPs, illustrated by the fact that they are not named as a source in the ECDC LD-GIS tool. An inventory of these potential sources should be readily available to public health authorities to enable a rapid outbreak source investigation in the event of a community cluster of Legionnaires' disease. Furthermore, conducting risk analyses of WWTPs could aid in identifying those at increased risk of Legionella proliferation, thereby enabling preventive measures.
Missed or delayed diagnosis of Kawasaki disease during the 2019 novel coronavirus disease (COVID-19) pandemic | 557cf0eb-222a-4f8c-840d-5773d8b34bd8 | 7196408 | Pediatrics[mh] | |
Forensic medical evaluation of penetrating abdominal injuries | 99977f92-1f22-4f34-bbf5-6f12b376e8e7 | 11372487 | Forensic Medicine[mh] | The frequency of firearms and sharp weapon use, commonly encountered in cases of violence, is alarming. The increase in individual armament and the ease of access to unlicensed weapons contribute to these violent incidents. Sharp objects are the most frequently encountered weapons in violence and injury incidents due to their availability in homes and workplaces, their widespread sale, affordability, and the lack of sanctions for carrying them if they do not meet legal specifications. Articles 86 and 87 of the Turkish Penal Code provide important details on the grading of injuries. These articles specify that injuries may have different legal consequences depending on their severity. Injuries treatable with simple medical interventions are considered the least serious injuries in the eyes of the law and usually refer to easily treatable conditions such as shallow cuts or minor bruises. However, more serious health conditions such as bone fractures, tendon damage, major blood vessel or nerve injuries, or internal organ damage, are not considered treatable with simple interventions. These types of injuries require more complex medical interventions and may result in more serious legal consequences. An injury that causes a life-threatening situation is classified as such when a person’s life is exposed to immediate danger following an injury but can be saved either by the individual’s own bodily resistance or by medical assistance. Importantly, a life-threatening situation must have occurred during the incident; death is not necessary. The fact that the person subsequently recovers, with or without treatment, does not alter this classification. 
When making a decision, the medical findings (the effect on the person) should be taken into account, rather than the magnitude, severity, or dangerousness of the event that caused the injury. Another criterion is persistent impairment or loss of function of one of the senses or organs: for this condition to be recognized after the injury, the impairment of the function of the sense or organ must be permanent. Under Article 86 of the Turkish Penal Code, if the offense of intentional injury is committed with a weapon, a more severe form of the crime occurs and the penalty is increased. Crimes of intentional injury are classified as crimes subject to complaint. However, in cases where the crime is committed against a superior, subordinate, spouse, or sibling, or against a person who cannot defend themselves physically or mentally, or is committed with a weapon, a lawsuit may be filed without a complaint. Article 6 of the Turkish Penal Code defines a weapon as any kind of cutting, piercing, or bruising tool made for use in attack and defense. In our study, we aimed to analyze the demographic characteristics of penetrating abdominal injuries, including the most common age range, the time periods during which the injuries occurred, and the effects of alcohol and substance use on such injuries. We also examined the extent of the injuries, the organs most commonly damaged, and the mortality rate, and sought to contribute to the trauma data of our country. The research aims to contribute to more effective management of injury cases by addressing the challenges in forensic medicine practice. It also aims to provide an important reference point for the development of injury prevention and intervention strategies by exploring the social dimensions of such injuries and the legal framework in response to them, providing foundational information for the development of relevant legal and health policies.
In our study, with ethics committee approval, we retrospectively reviewed the hospital archives and forensic reports of 28,619 cases admitted to the Emergency Department of Kütahya Evliya Çelebi Hospital over a five-year period from January 1, 2016 to December 31, 2020. All cases with penetrating abdominal injuries were included in the study: of the 28,619 cases screened, 85 (0.29%) met this criterion. After examining the forensic reports of the cases, data were obtained by reviewing the patients' past medical histories in the hospital's information management system. The data were evaluated for demographic characteristics, time of the incident, type of incident, and site and degree of injury using a statistical program.
Statistical Analysis
The data obtained in the study were analyzed using the IBM SPSS (Statistical Package for the Social Sciences) Statistics 22 program. For quantitative data, descriptive statistics such as mean, standard deviation, median, and maximum-minimum values were used. For qualitative data, frequency tables including frequency and percentage values were utilized. Two-way or three-way cross-tables and chi-square tests were employed to examine the relationships between variables. Cramer's V was used to quantify the degree and direction of the association between categorical variables. To determine whether there was a statistically significant difference between two independent groups on a numerical variable, the data were first tested for normal distribution using the Kolmogorov-Smirnov and Shapiro-Wilk tests. As the data did not conform to a normal distribution, the Mann-Whitney U test, a non-parametric test, was applied. Column graphs were created. Statistical significance was set at a 0.05 margin of error with a 0.95 confidence level.
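As a rough illustration of the contingency-table analysis described above (the authors used SPSS), the sketch below computes a Pearson chi-square statistic and Cramer's V for a 2×2 table using only the Python standard library. The counts are back-calculated from the percentages reported in the Results (74 men, 11 women; alcohol tested vs. not tested) and are therefore approximate, not the study's raw data.

```python
import math

def chi_square(table):
    """Pearson chi-square statistic and degrees of freedom for an
    r x c table of observed counts (list of rows)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / n  # expected count under independence
            stat += (obs - exp) ** 2 / exp
    df = (len(row_totals) - 1) * (len(col_totals) - 1)
    return stat, df

def p_value_df1(stat):
    """Two-sided p-value of a chi-square statistic with 1 degree of freedom."""
    return math.erfc(math.sqrt(stat / 2.0))

def cramers_v(stat, n, r, c):
    """Cramer's V effect size for an r x c table with n observations."""
    return math.sqrt(stat / (n * min(r - 1, c - 1)))

# Rows = male/female, columns = alcohol tested / not tested
# (approximate counts back-calculated from the reported percentages)
table = [[54, 20], [3, 8]]
stat, df = chi_square(table)
print(round(stat, 2), df, round(p_value_df1(stat), 3))  # p consistent with the reported p=0.002
```

For larger tables or exact p-values, `scipy.stats.chi2_contingency` performs the same computation with a full chi-square distribution.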
Ethics
Approval for this study was obtained from the Non-Interventional Clinical Research Ethics Committee of the Rectorate of Kütahya Health Sciences University with decision number 2021/11-20 on June 30, 2021. Since our study was an analytical retrospective study, data were obtained from the hospital health data system. Utmost attention was paid to the privacy of the individuals' identity information, and it was not shared with anyone outside the study team. Only health data relevant to the study were used; other data were not recorded.
Of the 85 patients with penetrating injuries to the abdominal cavity, 74 (87.1%) were male and 11 (12.9%) were female. The mean age was 31.3±13 years, with the youngest being 12 years old and the oldest 81 years old. The most common age range was 21-30 years (40%). The mean age for both sexes was 31 years. When analyzing the time intervals in which the incidents occurred (dividing the day into three 8-hour periods), there was no clear pattern among women, but incidents among men were notably concentrated in the evening and night hours. The most incidents were recorded between 20:00-04:00 hours, accounting for 64.9%, while the fewest occurred between 04:00-12:00 hours, accounting for 10.8%. When categorizing the locations of the incidents into urban centers and districts, 83.5% of the incidents occurred in the urban center and 16.5% in districts. When the origins of the injuries were analyzed, 87.1% were caused by intentional injury, 5.9% by accidents, 5.9% by suicide, and 1.2% by animal (boar) attacks. When analyzing the distribution of origins by gender, the rate of victims of intentional injury was the highest in both genders. When the distribution of origins according to the time of day was analyzed, intentional injuries were most common, occurring at a rate of 66.2% between 20:00-04:00 hours. In four of the five suicide cases a sharp instrument was used, and one case involved a firearm; all of them resulted in anterior abdominal injuries. Of these, one case had no injury to the abdominal organs, one involved a stomach injury, one a liver injury, and two intestinal injuries.
Four patients had a single wound, and one patient had 11 wounds. In the patient diagnosed with psychosis, who injured himself in 11 places with a sharp instrument, five of the wounds penetrated the abdominal cavity and one penetrated the pericardium; there was also a liver laceration and a left ventricular injury. He was operated on and discharged. When cases were evaluated according to the instrument used, the most common injuries were stab wounds at a rate of 69.4%, followed by firearm injuries at 27.1% and other causes (falls from a height, harvester accidents, and animal attacks) at 3.5%. Of the firearm injuries, 52% were gunshot bullet injuries, and 48% were shotgun pellet injuries. In all categories, the rate of stab wounds was higher than that of firearm injuries. Of the seven cases admitted as deceased, four were due to firearm injuries and three were due to stab wounds. Thus, while stab wounds accounted for the majority of injuries overall, firearm wounds were more common than stab wounds among cases that resulted in death. When we examine the instruments used according to gender, we observe a high rate of stabbing in both genders, while males have a higher rate of firearm injuries than females. However, this difference was not found to be statistically significant (p=0.43). In the forensic reports of the 23 cases with firearm injuries, localization was described in all cases; 11 of them had multiple wounds due to pellet injuries, eight of the 12 cases with gunshot wounds were described as having entry and exit wounds, and the nature of the wound was not mentioned in four of them. When analyzing the alcohol levels of the cases upon their arrival at the hospital after the incident, alcohol was detected in 36.5%, not detected in 30.6%, and not tested in 32.9%.
Of the cases where alcohol was detected, the levels were between 0-50 mg/dL in 7.1%, between 50-100 mg/dL in 4.7%, and higher than 100 mg/dL in 24.7%. When alcohol values were analyzed by gender, among the women alcohol was not tested in 72.7%, not detected in 18.2%, and detected in 9.1%, whereas among the men alcohol was detected in 40.5%, not detected in 32.4%, and not tested in 27%. The rate of alcohol testing in male subjects was statistically significantly higher than in female subjects (p=0.002). When the alcohol values of the cases were analyzed according to the time of admission to the hospital, 30.8% of the cases admitted between 20:00-04:00 hours had an alcohol value higher than 100 mg/dL, while only 4.5% of the cases admitted between 12:00-20:00 hours had such high values. When the presence of alcohol was analyzed according to the origin of the injury, among cases of intentional injury alcohol was detected in 39.2%, not detected in 32.4%, and not tested in 28.4%. Alcohol was detected in 40% of suicide cases, and alcohol was not tested in 80% of accident cases. Of the cases in which alcohol was detected, 48% were between the ages of 21-30, and 29% were between the ages of 31-40. In our study, the impact of alcohol levels on injury severity was also analyzed. The relationship between alcohol levels and the necessity for surgery was not statistically significant (p=0.698), nor was the relationship with the length of hospital stay (p=0.341) or with the likelihood of being admitted as deceased (p=0.906).
When cases were analyzed according to whether they underwent surgery by the general surgery service, 81% of the cases required surgery, 13% did not require surgery, and 6% died without undergoing surgery. When examining the organs damaged as a result of injuries penetrating the abdominal cavity, all abdominal organs were intact in 25.9% of the cases. Nearly half of the cases (44.7%) had a single organ injury, while 23.5% had damage to more than one organ. Including the cases involving multiple organ damage, the small intestine was the most frequently injured organ, affected in 23.7% of cases, followed by the liver at 18.9% and the stomach at 13.1%. The gallbladder and pancreas were the least frequently injured organs, each affected in 3.6% of cases. Because 5.9% of the patients died, no data on their organ injuries were available at our hospital. When we analyzed for organ dysfunction or loss, we found that 72 (84.7%) patients experienced no loss or dysfunction of abdominal organs, 7 (8.2%) patients suffered intra-abdominal organ loss, and 6 (7.1%) patients died. Among the surgeries for organ loss, there were 2 splenectomies, 1 nephrectomy, 2 cholecystectomies, 1 combined splenectomy and distal pancreatectomy, and 1 combined splenectomy and nephrectomy. Extra-abdominal organ loss (an eye) occurred in 1 case, which was not included in these rates. Six of the cases resulting in organ loss were caused by stab wounds and 1 by a firearm injury. When analyzing the origins of the cases that resulted in organ loss, all were due to intentional injury crimes, and no organ loss occurred in cases with other origins. Looking at the number of wounds across the entire body, 45 (52.9%) had a single wound, 10 (11.8%) had 2 wounds, 10 (11.8%) had 3 wounds, and 25 (23.5%) had more than 4 wounds. The average number of wounds was 3.6: 5.8 for firearm injuries and 2.7 for stab wounds.
Since the distribution of shotgun pellet injuries was not described in detail, it was assumed that these injuries occurred from a single shotgun shot. While the median number of injuries was 1 in living patients, it was 5 in patients who died. There was no statistically significant difference in the number of injuries between the patients who died and those who did not (p=0.061); the number of wounds did not contribute to mortality. The rate of female patients with more than 3 wounds was 45%, while this rate was 20% in male patients. When analyzing the abdominal injuries according to the direction of penetration, 66 (77.6%) of the cases had anterior abdominal injuries, 11 (12.9%) had injuries penetrating the abdominal cavity from the posterior part of the body, and 8 (9.4%) had lateral injuries. Upon examining other injuries encountered in addition to those in the abdominal cavity, 47.1% of the cases had no extra-abdominal injury, 24.7% had lung injuries (pneumothorax, hemothorax), 36.4% had extremity injuries, 3.5% had diaphragm injuries, 2.4% had heart injuries, and 1.2% had facial injuries. The mean duration of hospitalization was 9 days. The hospitalization duration ranged from 1 to 10 days for most patients, while it exceeded 20 days for a few patients. When the cases were evaluated for bone fractures, 75 (88.2%) had no bone fractures, 4 (4.7%) had rib fractures, 3 (3.5%) had fractures in facial and extremity bones, and 3 (3.5%) had no data on bone fractures because they were deceased upon arrival. When analyzing the origins of the cases with bone fractures, all were due to intentional injury. It was determined that 70 (82.4%) of the cases had no arterial injury, 7 (8.2%) had intra-abdominal arterial injuries, 2 (2.4%) had extra-abdominal arterial injuries, and 6 (7.1%) had no data on vascular injuries because the patients died.
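The comparison of wound counts between deceased and surviving patients above relied on the Mann-Whitney U test named in the Methods. A minimal stdlib sketch of the U statistic, using hypothetical wound counts rather than the study's data:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x against sample y:
    the number of pairs (xi, yj) with xi > yj, counting ties as 0.5.
    This pairwise count is equivalent to the usual rank-sum definition."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Hypothetical wound counts (not the study's data)
died = [5, 4, 6, 7, 5, 2, 6]
survived = [1, 1, 2, 1, 3, 2, 1, 4, 1, 2]
u = mann_whitney_u(died, survived)
print(u)  # a U far from n*m/2 = 35 suggests systematically higher counts in one group
```

A p-value additionally requires the exact or normal-approximation null distribution of U; statistical packages such as SPSS (used by the authors) or `scipy.stats.mannwhitneyu` compute this automatically.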
Intra-abdominal bleeding was present in 52 (61.2%) of the cases and absent in 28 (32.9%); for 5 (5.9%) no data were available because they were deceased. Regarding mortality, 78 (91.8%) of the cases were discharged and 7 (8.2%) died in the hospital. Six of the deceased had suffered cardiac arrest before reaching the hospital: they were admitted to the emergency department under ongoing cardiopulmonary resuscitation (CPR) by the 112 ambulance team, did not regain respiration or circulation, and were declared dead, while the remaining case was declared dead after the first 24 hours. All cases admitted as deceased originated from intentional injury, and their mean age was 29.6 years.

In our study, abdominal injuries admitted to Kütahya Evliya Çelebi Hospital over a 5-year period were analyzed. Most victims were around 30 years old, predominantly male, and victims of violence. Admissions were most common at night, and the injuries were mostly inflicted with sharp instruments. Although sharp instruments caused the majority of injuries, firearm injuries were more common in fatal cases. The higher risk among young men is often attributed to social and psychological factors: risky behaviors, alcohol and substance use, and tendencies toward conflict or violence are more common in this group. The nature of the wound was not recorded in four cases. Errors and omissions in forensic reports occur frequently in emergency departments. In forensic reporting, accurate localization and description of wounds, and identification of entry and exit wounds in gunshot cases, are crucial for the forensic process.
Conclusions about the weapon used can be drawn by evaluating the skin findings left by knives, which are frequently involved in cutting and piercing injuries. Some incidents may involve more than one assailant and more than one knife. In forensic reporting, it is very important to determine whether a wound follows a skin-subcutaneous course, affects deep soft tissues (muscle and fascia), crosses the peritoneum, and/or causes internal organ injury; these factors are important determinants of the severity of the punishment received by the defendant. Detailed descriptions of surgical interventions, operation notes, and the lesions observed on the body before the first intervention are critical for guiding forensic medicine practice. When the alcohol levels of the cases on arrival at the hospital were analyzed, alcohol was detected in 36.5%, not detected in 30.6%, and not tested in 32.9%. In the study of Altun et al. on sharp-object injuries in living subjects, alcohol was detected in 39% of the subjects, not detected in 32%, and no alcohol information was available for 29%. In a study by Bilgin et al. on forensic autopsy cases involving stab wounds, alcohol was detected in 34.6% of the cases and narcotic-drug substances in 4.7%. We believe that examining substance use in addition to alcohol in cases of suicide and violence will help clarify the forensic process and identify underlying causes. Alcohol tests were requested less frequently for female cases. Considering that these cases are forensic in nature, and that alcohol and drug use is also important in the follow-up and treatment of penetrating abdominal trauma, these analyses should be performed in all forensic cases. Alcohol was not tested in 80% of the accident cases.
In emergency conditions, the rate of requesting alcohol tests varied according to the type of injury. In our study, the relationship between alcohol level and injury severity (surgery, duration of hospitalization, and emergency admissions) was not statistically significant. Göksu et al., in a study of patients admitted to the emergency department after traffic accidents, found that blood ethanol level affected neither the duration of hospitalization nor the mortality rate. Afshar et al., investigating the relationship between alcohol and injury and death in trauma patients, reported that mortality was highest in the group with moderate blood alcohol concentration and lowest in the group with very high blood alcohol concentration. In our series, 81% of the cases required surgical intervention, and the organ most frequently injured was the small intestine, affected in 23.7% of cases. In a study by Badak et al. on abdominal stab wounds, the injuries were distributed as follows: 28% small intestine, 14.6% spleen, 12.1% liver, 10.9% colon, and 7.3% stomach. When organ dysfunction or loss was analyzed, 84.7% of our cases had no loss or dysfunction of abdominal organs, and the organ most commonly lost was the spleen; splenectomy has also been reported as the most common surgical procedure for blunt abdominal trauma. In the Turkish Penal Code, the crime of intentional injury, under crimes against bodily inviolability, is defined in Article 86, and the crime of injury aggravated by its consequences in Article 87.
Paragraph 2b of Article 87 defines the crime of aggravated wounding: loss of function of one of the senses or organs constitutes the qualified form of the crime and increases the punishment received by the offender. In this context, loss of organ function in penetrating abdominal injuries is significant. In our study, 58 patients had organ injuries and 7 experienced intra-abdominal organ loss. The average number of wounds was 3.6, with an average of 5.8 in firearm injuries and 2.7 in sharp-object injuries. The higher wound count in firearm injuries may reflect the presence of both entry and exit wounds, which increases the total, as well as the relative ease of shooting: no physical struggle is required, and the distance between individuals is greater with firearms than with sharp objects. In the study by Altun et al., 53% of the cases had a single injury, 22.7% had 2, 10.9% had 3, and 13.3% had 4 or more lesions. In Derkuş's study, 54.6% of the cases had 1 injury, 18.3% had 2, 11.2% had 3, and 15.9% had more than 3. The mean number of injuries was 3.4 in living patients and 5.4 in deceased patients; this difference was not statistically significant (p>0.05), and the number of wounds did not contribute to mortality. Uysal's study likewise found that the number of injuries did not contribute to mortality (p>0.05). When the abdominal injuries were analyzed by direction of penetration, 77.6% of the cases had anterior abdominal injuries. In the study by Kurt et al.
on sharp penetrating injuries to the abdomen, 7.7% of the cases penetrated the abdominal cavity from the posterior and flank, while 92.2% involved penetration of the anterior abdominal cavity. Regarding injuries found elsewhere in the body in addition to those in the abdominal cavity, 47.1% of our cases had no extra-abdominal injuries, 24.7% had lung injuries (pneumothorax, hemothorax), 36.4% had extremity injuries, 3.5% had diaphragm injuries, 2.4% had heart injuries, and 1.2% had facial injuries. In Uysal's study, 28.1% of the cases had extremity injuries and 10.2% had head and neck injuries. Muratoğlu's study on deaths due to penetrating injuries found thoracic injuries in 12.5%, abdominal injuries in 7.7%, extremity injuries in 5.2%, and injuries in more than one region in 35.4%. In Polat's study on blunt and penetrating abdominal injuries, 25% had thoracic and 25% had extremity injuries. In our series, 61.2% of the cases had intra-abdominal bleeding, 32.9% did not, and for 5.9% no data were available because they died. In Taçyıldız's study on penetrating abdominal trauma, intra-abdominal hemorrhage exceeding 1000 cc was found in 59.5% of the cases. The mean age of our patients admitted as deceased was 29.6 years; in Taçyıldız's study, the mean age of deceased patients with penetrating abdominal trauma was 31.2 years. All our cases involved life-threatening injuries, since all had injuries penetrating the abdominal cavity; in legal terms, neither death nor recovery changes this. In the forensic traumatological evaluation of all cases, the effect of the injury on the person was 'not mild enough to be resolved by simple medical intervention.' Likewise, the absence of surgery or of organ damage does not change this assessment. Attention should be paid to these issues in forensic reporting.
Forensic medicine experts may be asked by the courts to determine whether wounds in persons injured with a sharp instrument were self-inflicted or caused by another person during a struggle. Forensic medicine reports are crucial for distinguishing the crime of attempted intentional homicide from the crime of intentional injury. In intentional killing, where the result can be separated from the act, if the perpetrator could not complete the executive acts of the crime he started due to reasons beyond his control (i.e., if the victim did not die), the act constitutes attempted intentional homicide. At this point, differentiating attempted intentional homicide from intentional injury is important. The determination is made by considering factors such as the targeted body area, the number and severity of the blows, the nature of the wounds, whether the act ended spontaneously or because of an obstacle, and the perpetrator's behavior towards the deceased or the victim after the incident. The localization, characteristics, severity, and number of the wounds are therefore important, and wounds should be accurately described in forensic reports. One of our cases, a patient diagnosed with psychosis who injured himself with a sharp instrument in 11 places (5 of which penetrated the abdominal cavity and one the pericardium, resulting in liver laceration and left ventricular injury), illustrates how seriously a person can injure himself. Suicidal behavior is a significant psychiatric issue often seen in mental disorders, and treatment compliance may be impaired in persons with mental disorders; inpatient treatment may be necessary depending on the patient's clinical condition. In this context, consent is an important issue in the inpatient treatment of psychiatric patients. Article 432 of Civil Code No.
4721 stipulates that freedom may be restricted for protective purposes. Under this provision, individuals with mental illness, mental impairment, or alcohol or drug addiction can be hospitalized for treatment against their will, following a medical board report, when there is a risk of harm to themselves or others; everyone has the right to report such situations to the authorities. Injuries to the abdominal cavity are among the most common types encountered in emergency departments and are frequently the subject of forensic reports. They are considered life-threatening because they penetrate the abdominal cavity. In our study, we analyzed demographic characteristics, times of injury, types of injuries, and their outcomes. Penetrating injuries to the abdominal cavity were most commonly inflicted with sharp instruments and, secondarily, with firearms, and were typically related to violent incidents. The majority of the cases involved young adult males, and the incidents predominantly occurred at night. The rate of alcohol consumption was high. Alcohol tests tended to be requested less often during first encounters in the emergency service, in cases involving females, and in non-violent cases. Half of the cases received a single wound, and the majority of injuries were to the front of the body. Most cases required surgical intervention. The organs most frequently damaged were the small intestine and liver, and the spleen was the most commonly lost organ. Bone fractures and arterial injuries were less common. The mean duration of hospitalization was 9 days, and the mortality rate for injuries to the abdominal cavity was 8.2%; however, 6 of the 7 patients who died from penetrating abdominal injuries were admitted already deceased, and one patient, known to have sustained a splenic injury, died 9 days after admission.
Penetrating abdominal injuries require careful evaluation and meticulous planning of surgical intervention. Optimizing surgical intervention is critical both for protecting patient health and for achieving the best possible outcomes; triage and evaluation, patient-specific planning, a minimally invasive approach, a multidisciplinary approach, emergency preparedness, an adequate supply of blood and blood products, and postoperative follow-up are all important. In each case, the most appropriate intervention should be determined according to the specific situation and needs. Alcohol and substance abuse are more common in forensic trauma cases than in the general population, and severe injuries may involve life-threatening internal organ or vascular damage; substance and alcohol use may complicate the interpretation of the clinical picture and the management of the case. In our study, substance analysis was not requested in any case, and alcohol testing was performed predominantly in male cases. In forensic trauma cases, requesting both alcohol and drug tests would help clarify the clinical process and enhance the accuracy of forensic reporting. Various factors affect the length of hospital stay, ranging from the patient's general health status to the severity of the injury, age and comorbidities, treatment methods, presence of complications, quality of postoperative care, and social and psychological factors. Detailed analysis of data collected in emergency departments allows a better understanding of trauma cases and identification of risk factors; these data can contribute to the development of forensic and public health policies. Detailed epidemiologic studies are recommended to understand the demographic distribution of trauma-related deaths and injuries.
Forensic evaluation of trauma cases is particularly important for identifying violence and abuse, and the forensic medical examination of such cases should be integrated into emergency department protocols. Collaboration with public health agencies can help prevent a wide range of trauma-related health problems, for example by developing early intervention strategies for the chronic health issues and psychological problems that may follow trauma. National policies and regulations should be developed for trauma care in emergency departments, and the resources needed to implement these policies should be provided.
Evaluating Neonatal Telehealth Programs Using the STEM Framework | a454d065-5f8d-4eb4-b11a-809671e093dc | 8693890 | Pediatrics[mh] | Several models for measuring telehealth have been published, most cited of which are from the National Quality Forum and World Health Organization. These frameworks focus predominantly on health care quality domains instead of health outcomes and have not been applied to perinatal health. An evaluation toolkit developed by Supporting Pediatric Research in Outcomes and Utilization of Telehealth (SPROUT) reorganizes these measure concepts into a health outcomes centric model. Specific information about each framework is discussed below.
The National Quality Forum's Telehealth Measurement Framework is a comprehensive review that identified existing measures and measurement concepts, organizing them into four domains (with subdomains): access to care, financial impact/cost, experience, and effectiveness. Access refers to the ability of patients, caregivers, and family members to receive care from the providing team and exchange relevant clinical information. Financial impact/cost effects are those affecting the patient/family, care team, health system, payer, and society. Experience refers to the usability and effect of telehealth on the patient/family, care team members, and community, and whether the care meets expectations. Effectiveness is measured at the system, clinical, operational, and technical levels, with health outcomes falling under the subdomain of clinical effectiveness. Across these domains, the NQF further defines 53 measure concepts in six key areas: travel, timeliness of care, actionable information, added value to provide evidence-based best practices, patient empowerment, and care coordination. The NQF framework explains how to develop measures that predominantly focus on evaluating telehealth's ability to deliver high-quality healthcare. Importantly, it emphasizes the perspectives of four stakeholder groups (patient, care team, health system, payers) as well as the need to understand the impact of a telehealth program on the community. However, safety is included only as a patient experience and not as a health system factor, and although health outcomes appear in the clinical effectiveness section, they are not an essential part of evaluating telehealth initiatives. In their appendices, the authors provide a comprehensive list of measure concepts that are mostly adult-related but nevertheless exemplify how perinatal measures could be derived.
In 2016, the World Health Organization (WHO), with several collaborators, offered a measurement strategy that differentiates "monitoring" (measuring the functionality, fidelity, stability, and quality of the telehealth system) from "evaluation" (measuring its usability, feasibility, efficacy, effectiveness, and economic/financial effects). In addition, the WHO recommended that evaluators consider the technology's implementation stage (concept, prototype, pilot, demonstration, scale-up, integration/sustainability) when deciding which measurement areas to focus on. During the prototype and pilot stages of a new telemedicine program, assessment focuses on whether the system is: - Functional: meets technical specifications. - Feasible: works as intended in a given context. - Stable: has acceptable technical failure rates during normal and peak use. - Usable: can be used as intended by users. As programs mature, it becomes relevant to assess whether users in the field can consistently accomplish the stated objectives (fidelity) and whether the intervention's quality level can yield the intended outcomes. At the scale-up/integration stage, evaluators can study whether the system demonstrates measurable impact on processes and outcomes (efficacy) and how closely users can reach best or potentially better practice standards with the system in the field (effectiveness). Alongside these measures, quantifying cost and resource expenditure is also important. While the WHO model has the advantage of offering programs an evaluation roadmap from inception to scale, like the NQF framework it focuses mainly on telehealth use in adult settings. Furthermore, it does not emphasize tracking health outcomes until late in the implementation cycle.
We suggest that evaluators clearly identify and articulate the clinical health outcomes potentially affected by telehealth as early as the prototype stage, even if these outcomes take time to change or depend on non-telehealth factors. This recommendation stems from experience: system changes like telehealth implementation are costly, so it is not enough to describe how healthcare delivery will be better; evaluators must also specify which healthcare outcomes they hope to improve. A 2016 AHRQ systematic review illustrated the critical connections between telehealth interventions and clinical outcomes. Filtering over 1,400 citations, the authors summarized 58 systematic reviews and reported the level of evidence for associations between telehealth use and outcomes such as mortality, quality of life, and reductions in hospital admissions. Telehealth use included communication, counseling, and monitoring of chronic conditions such as cardiovascular and respiratory disease. In the area of maternal and child health, however, the authors concluded that while there could be enough primary studies to constitute some evidence (e.g., showing no benefit for home uterine monitoring), additional studies and systematic reviews are warranted.
The American Academy of Pediatrics Section on Telehealth Care's SPROUT has combined the invaluable work of the organizations described above with its members' expertise into a toolkit called SPROUT Telehealth Evaluation and Measurement (STEM). STEM has four measurement domains: (1) health outcomes, (2) health delivery quality and cost, (3) experience, and (4) program implementation and key performance indicators (KPIs). These domains cover themes relevant, in varying degrees, to all four stakeholder groups. STEM's first domain, health outcomes, is arguably the most critical because these measures represent the end goal of all efforts to deliver high-quality healthcare: making patients healthier. This domain includes clinical measures of individuals or populations, many of which are already collected in large neonatal data registries such as the Vermont Oxford Network (VON) and the Children's Hospitals Neonatal Database (CHND). The National Quality Forum and the Centers for Medicare and Medicaid Services endorse several of these measures related to neonatal infection and perinatal complications. The domain also includes mental health measures such as anxiety, depression, and stress (e.g., the Center for Epidemiologic Studies Depression Scale, the Impact of Event Scale-Revised, the NICU Parental Stress Scale, and patient-reported outcomes) as well as assessment of provider burnout. Metrics associated with the provision of healthcare services form the second domain: the quality and cost of healthcare delivery. This domain includes most of the National Academy of Medicine's quality constructs (safety, timeliness, patient-centeredness, effectiveness, equity) plus cost and resource burden measures. Most of the NQF's domains and subdomains map onto STEM's second domain.
Examples include the percentage of pregnant mothers receiving timely prenatal care, the percentage of mothers educated on breastfeeding, rates of referral to and completed visits with high-risk obstetricians when needed, access to mental health wellness programs during the perinatal and postpartum period, and the number of safety issues encountered per patient treated. Other examples address timely access to pediatric subspecialists and delivery of best, equitable practices via tele-consultation, tele-coaching, and tele-training. Compliance with such clinical pathways has recently become trackable through monitoring of electronic order set usage and of HL7-formatted messages exchanged in hospital information systems. It is important to measure the costs, in dollars and resource expenditures, of a tele-resuscitation or teleconsultation program at both the originating site (location of the patient) and the remote site (location of the consultants). The cost/savings impact of telehealth encounters includes miles spent or saved, cost incurred or avoided, and workdays and school days lost or gained for caregivers and providers. Safety events can be tracked through the hospital's existing safety reporting systems and quality/safety departments. Measures of equity and the related social determinants of health are increasingly important, as COVID-19 has uncovered wide gaps in technology penetrance in underserved populations. Variables to track include the caregiver's ethnicity, race, gender, language preference, and payer mix, census-based markers such as the social vulnerability index, and social determinants. Understanding associations between disparities and health outcomes, delivery quality, and cost is critical to ensuring that all patients can benefit from judicious implementation of telemedicine. To make telemedicine systems more effective in delivering better care and health outcomes, implementers need to understand providers' and patients'/families' experiences.
STEM's third domain measures the individual experience and the logistical impact these encounters have on daily life. Published assessment tools administered to NICU parents and providers, such as the Telehealth Usability Questionnaire, the Patient Assessment on Communication in Telemedicine (PACT), the TSUQ, and the Net Promoter Score, can assess the usability of the technology, satisfaction with the communication between providers and patients, and likelihood of recommendation. The fourth domain encompasses key performance indicators that describe the operational aspects of the telehealth program: the number of video visits or tele-resuscitation sessions, the number and type of technology issues, the types of conditions addressed, the number of patients enrolled, the size of the telehealth network and number of partnering institutions, and operational costs and staffing expenditures. These measures are typically important to the enterprise's overall strategy and budget and can therefore overlap with measures in other domains, e.g., cost-effectiveness analyses (domain 2) and KPIs (domain 4). When assessing a particular telehealth program or initiative, evaluators are encouraged to identify 1-2 measures assignable to each STEM domain. While many measures of clinical outcomes and health delivery quality and cost offer objective data points and can be found in existing data sources, they should undergo statistical testing for reliability and validity. Likewise, surveys asking for individual opinions, experiences, and preferences can yield rich subjective data but must be carefully worded and distributed to mitigate sampling and responder bias. Stakeholders use data differently, and their perceptions of the relative value of this information may alter their benefit-to-cost analyses. Understanding what different stakeholders perceive to be telehealth's benefits and costs can help implement telehealth more effectively.
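The four STEM domains and the kinds of measures filed under each can be organized as a simple lookup structure. The sketch below is illustrative only: the example measure names are drawn loosely from the text and are not an official SPROUT schema.

```python
# Illustrative sketch of the four STEM measurement domains with example
# measures drawn loosely from the text. Not an official SPROUT schema.
STEM_DOMAINS = {
    1: {"name": "Health outcomes",
        "examples": ["neonatal mortality", "NICU Parental Stress Scale score",
                     "provider burnout"]},
    2: {"name": "Health delivery quality and cost",
        "examples": ["timely prenatal care rate", "safety events per patient",
                     "miles and workdays saved"]},
    3: {"name": "Individual experience",
        "examples": ["Telehealth Usability Questionnaire score",
                     "Net Promoter Score"]},
    4: {"name": "Program implementation and KPIs",
        "examples": ["completed video visit count", "technical failure rate",
                     "operational cost"]},
}

def domain_of(measure):
    """Return the domain number under which a measure is filed, else None."""
    for number, domain in STEM_DOMAINS.items():
        if measure in domain["examples"]:
            return number
    return None
```

A program evaluation plan could then be audited for coverage, for example by confirming that at least one chosen measure maps onto each of the four domains.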
To illustrate, the table shows how to use the STEM toolkit for two perinatal telemedicine interventions: (1) teleconsultation for newborn resuscitation in community hospitals and (2) post-discharge video visits to patients' homes. The intervention column describes each telehealth intervention, and the data capture method is stated beneath each domain to highlight the importance of identifying reliable data sources early in the evaluation planning process. The domain 1 column defines the health outcomes for each intervention; in our examples these are, respectively, first NICU admission temperature and average weight gain within six months after discharge. The domain 2 column states the health delivery quality/cost measures. For tele-resuscitation, adherence to a neonatal resuscitation practice pathway for managing airway emergencies (called MRSOPA) may be measured by video recording review. For post-discharge video visits, healthcare utilization and safety catches may be monitored through the electronic medical record and locally used safety reporting systems. The domain 3 column describes the attitudes and experiences of the patient, caregiver, provider, and other stakeholders toward the telemedicine process, including appointment scheduling, the technology's usability, and satisfaction with the encounter. The last domain, program key performance indicators (KPIs), comprises summary statistics important to hospital administration, such as encounter completion rates, incidence of technical issues, average cost to sustain the program, and benchmarks against similar telemedicine programs. Once the team has defined the STEM dataset variables for the telemedicine intervention, it can assess equity by measuring and comparing the variables among different disparity cohorts.
In the post-discharge video visit example, weight gain, readmissions, and patient satisfaction may be compared between patient cohorts living in areas with high and low social vulnerability indices.
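The equity check just described, comparing an outcome variable between high- and low-social-vulnerability cohorts, can be sketched as follows. All patient records, outcome values, and the 0.5 SVI cutoff are hypothetical illustrations, not study data.

```python
# Hedged sketch: compare a STEM domain-1 outcome (post-discharge weight
# gain, g/day) between social vulnerability index (SVI) cohorts.
# Records and the 0.5 cutoff are hypothetical.
from statistics import mean

records = [
    {"svi": 0.82, "weight_gain": 22.0},
    {"svi": 0.74, "weight_gain": 25.5},
    {"svi": 0.31, "weight_gain": 30.0},
    {"svi": 0.18, "weight_gain": 28.0},
]

def mean_gain_by_cohort(rows, cutoff=0.5):
    """Mean weight gain for the high-SVI (>= cutoff) and low-SVI cohorts."""
    high = [r["weight_gain"] for r in rows if r["svi"] >= cutoff]
    low = [r["weight_gain"] for r in rows if r["svi"] < cutoff]
    return {"high_svi": mean(high), "low_svi": mean(low)}

gaps = mean_gain_by_cohort(records)
# A large gap between cohorts would flag a potential equity concern.
```

In practice the same grouping would be repeated for readmissions and satisfaction scores, and any persistent cohort gap investigated before scaling the program.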
The telemedicine implementer's dilemma is often deciding how best to integrate telemedicine into existing workflows in ways that lead to better, measurable health outcomes and delivery quality. The driver diagram is a powerful quality improvement tool that links SMART aims (Specific, Measurable, Attainable, Relevant, and Time-bound) with interventions that can achieve those aims. Health outcomes often take time to improve, whereas healthcare quality measures can improve more quickly and therefore make good targets for SMART aims. STEM's domain 2 (healthcare quality and cost) and domain 3 (individual experience) align well with what is typically measured in a QI project. In the tele-resuscitation example, the health outcomes are neonatal morbidity and mortality rates, while the SMART aim is to 'achieve 95% concordance with neonatal resuscitation steps in community nurseries that are not staffed by neonatologists within 12 months.' This driver diagram, best constructed by a stakeholder group of neonatal and obstetric clinicians, nurses, local physicians, and respiratory therapists, identified three key drivers: (1) the resuscitation team having the necessary skills and knowledge of best practice, (2) availability of expert consultants, and (3) teamwork. Note that up to this point, the SMART aim and drivers are not linked to telemedicine. In the next steps, the team identifies telemedicine interventions that could help accomplish the stated drivers and explains how each intervention benefits the baby being resuscitated. These benefits map back onto STEM domains/subdomains, completing the link from the main health outcome through the interventions to STEM.
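The driver-diagram logic above (a SMART aim, its key drivers, and telemedicine interventions mapped back to STEM domains) can be represented as a small nested structure. The specific intervention names below are hypothetical examples in the spirit of the tele-consultation and tele-training options mentioned earlier, not the stakeholder group's actual choices.

```python
# Sketch of the tele-resuscitation driver diagram described above.
# Intervention names are hypothetical examples; the STEM domain tags show
# where each intervention's expected benefit would be measured.
driver_diagram = {
    "smart_aim": ("Achieve 95% concordance with neonatal resuscitation steps "
                  "in community nurseries within 12 months"),
    "health_outcome": "neonatal morbidity and mortality rates",  # STEM domain 1
    "key_drivers": {
        "skills and knowledge of best practice": [
            {"intervention": "tele-training sessions", "stem_domain": 2}],
        "availability of expert consultants": [
            {"intervention": "on-demand video teleconsultation", "stem_domain": 2}],
        "teamwork": [
            {"intervention": "joint video debriefs after resuscitations",
             "stem_domain": 3}],
    },
}

def uncovered_drivers(diagram):
    """Key drivers that have no telemedicine intervention attached yet."""
    return [d for d, ivs in diagram["key_drivers"].items() if not ivs]
```

Traversing such a structure makes it easy to audit that every driver has at least one intervention and that each intervention's expected benefit is assigned to a STEM domain.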
The value equation can be summarized as benefits over costs, where benefits are variables that add value when they increase and costs are variables that lower value when they increase. Examples of benefit variables are measurements of quality, efficacy, and safety in telemedicine care; examples of cost variables are resource usage and dollars spent delivering care. Differences in value perspective among stakeholder types (patient and family, provider, health system, payer, and policymaker) can produce synergistic or oppositional levels of support for a telemedicine intervention. Sometimes patients and providers are placed in conflict with non-clinical stakeholders, as has occurred when payers believed that a treatment's costs outweighed its benefits (e.g., bone marrow transplantation for treatment-resistant breast cancer, or coverage of antiviral treatment for hepatitis C) while other stakeholders such as patients and providers disagreed. The ability of stakeholders to view and understand each other's value perspectives is needed to create a better healthcare delivery system.

Patient and family
To parents, high-value healthcare includes not only better clinical health for their babies but also relief of their baby's pain, effective communication from care teams to them and with each other, and greater closeness and bonding with their baby, among other factors. These are counterbalanced by higher out-of-pocket healthcare expenditures, loss of work or school days, and medical harm. Often, parents do not consider their own wellbeing to be part of the 'high-value healthcare' of their child.

Provider
To perinatal providers, high-value healthcare includes maternal and neonatal outcomes and the health of the caregivers, such as their stress and anxiety.
Helping caregivers cope with the psychological effects of having a baby in the NICU could help the child's long-term outcomes, because higher levels of maternal stress have been associated with receptive language and adjustment problems at four years old. Other high value factors for providers include the system's ability to help them deliver better and safer care, and higher reimbursement rates. In contrast, variables that lower healthcare value include waste (e.g., excessive waiting time, inefficient processes and workflows, defective equipment), avoidable readmissions, and medical errors.

Health system

To health systems, a high value perinatal program typically shows improving neonatal outcome rates over time and comparable or better benchmarking against similar programs. Higher payer reimbursement rates are valuable to the health system and support ancillary services such as laboratory and diagnostic suites, other clinical services that often consult in the NICU (such as genetics and pulmonary), and research and innovation. Variables that lower value are higher operational cost, waste, and medical errors. Whether avoidable readmissions are a bottom-line cost or benefit to health systems depends on whether their payer contracts impose penalties.

Payer

To payers, a high value perinatal program is typically one that delivers the best neonatal and maternal outcomes for its plan members at the lowest monetary cost. This rather cynical view has merit because it drives more efficient and effective evidence-based health care. The Centers for Medicare and Medicaid Services and commercial payers are becoming proponents of value-based reimbursement models, where providers are paid based on patient outcomes rather than on the volume of procedures completed.
One result of this has been bundled payments that include payment for performance on quality measures such as postpartum visit rates, where health systems are responsible for cost management but still incentivized to adhere to best practices. To a degree, such strategies help align the value equation between payers, health systems/providers, and patients such that, for example, higher avoidable readmissions become a cost to all stakeholders. However, implementation will only be successful when such strategies are created and executed through collaboration, with all stakeholders making their value equations transparent.

Lawmaker

Lawmakers are critical stakeholders who can enact laws and regulations that drive provision of high-quality healthcare. Their role in this system highlights the impact health care systems have on communities and society. Examples include those listed in the CDC's community health status indicators (e.g., no care in the first trimester, infant mortality disparities). Lawmakers could be concerned with how health care provisions impact unemployment and school attendance rates in the community.
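The value equation discussed above (benefits over costs) can be made concrete with a small sketch. All scores below are hypothetical and on an arbitrary common scale; the point is only that the same intervention can score differently under different stakeholders' value equations:

```python
def value_score(benefits, costs):
    """Value equation sketch: total benefits divided by total costs.
    Inputs are stakeholder-specific scores on an arbitrary common scale."""
    total_cost = sum(costs.values())
    if total_cost == 0:
        raise ValueError("costs must be non-zero")
    return sum(benefits.values()) / total_cost

# Hypothetical scores for one telemedicine intervention, from two perspectives.
family_view = value_score(
    benefits={"clinical_outcome": 8, "bonding": 6, "communication": 7},
    costs={"out_of_pocket": 3, "lost_work_days": 2},
)
payer_view = value_score(
    benefits={"member_outcomes": 8},
    costs={"reimbursed_dollars": 10},
)
```

Here the family perspective rates the intervention far higher than the payer perspective, illustrating why making each stakeholder's value equation transparent matters when negotiating coverage and implementation.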
In conclusion, telehealth is a health delivery tool offering opportunities to improve neonatal outcomes and care delivery. A standard approach to evaluating neonatal telehealth programs would allow data to be aggregated across multiple health systems, making possible studies of rare conditions and comparisons of different locations and methods for delivering services via telehealth. STEM offers a construct to define and organize telehealth measures in terms of health outcomes, health delivery quality and costs, individual experiences, and program implementation and benchmarks. When evaluating neonatal telemedicine use, stakeholders and program directors should undertake efforts to identify actionable measures under each domain .
The authors have no conflicts of interest to declare.
Neuropathologist-level integrated classification of adult-type diffuse gliomas using deep learning from whole-slide pathological images | 7314d5a0-ffec-4da8-8c64-fd4cccdd6c69 | 10567721 | Pathology[mh] | Diffuse gliomas, which account for the majority of malignant brain tumors in adults, comprise astrocytoma, oligodendroglioma, and glioblastoma , . The prognosis of diffuse gliomas varies, with median survival being 60–119 months in oligodendroglioma, 18–36 months in astrocytoma, and 8 months in glioblastoma . The fifth edition of the World Health Organization (WHO) Classification of Tumors of the Central Nervous System (CNS) released in 2021 has categorized adult-type diffuse gliomas into three types: (1) astrocytoma, isocitrate dehydrogenase (IDH)-mutant, (2) oligodendroglioma, IDH-mutant, and 1p/19q-codeleted, and (3) glioblastoma, IDH-wildtype (short for A, O, and GBM) . This newest edition has combined not only established histological diagnosis but also molecular markers for achieving an integrated classification of adult diffuse gliomas , . In a clinical scenario, integrated diagnosis by combining histological and molecular features of glioma is a time-consuming and laborious procedure, as well as an economically expensive examination for patients. On one hand, microscopic diagnosis requires experienced pathologists’ exhaustive scrutiny of hematoxylin and eosin-stained (H&E) slides. Moreover, histological diagnosis of glioma is subjected to interobserver variation, and routine review of histological diagnosis by multiple pathologists is recommended , . On the other hand, molecular diagnosis necessitates invasive surgical resection/biopsy for glioma tissue followed by Sanger sequencing and fluorescence in situ hybridization (FISH) , which are not always available in routine examinations of many medical centers. 
The development of digitized scanners allows glass slides to be translated into whole-slide images (WSIs), which offers an opportunity for image analysis algorithms to achieve automatic and unbiased computational pathology. Most existing WSI-based diagnosis models adopt a deep-learning technique known as the convolutional neural network (CNN) for image recognition – . For glioma, several pathological CNN models have been proposed, such as a grading model trained on a small public dataset to distinguish glioblastoma from lower-grade glioma , a diagnostic platform developed on 323 patients to classify five subtypes according to the 2007 WHO criteria , a model trained on The Cancer Genome Atlas dataset to classify the three major types of glioma based on the 2021 WHO standard , and a histopathological auxiliary system for classification of brain tumors . However, a WSI diagnostic model for detailed classification of adult-type diffuse glioma strictly according to the 2021 WHO rule is still in demand. Previous evidence has shown that histopathological image features in glioma are associated with specific molecular alterations such as the IDH mutation – . However, as different genotypes may share overlapping histological features on H&E sections (e.g., IDH-wildtype and IDH-mutant tumors), developing an integrated diagnosis model directly from WSIs to classify the 2021 WHO types, which combine both pathological and molecular features, is still challenging. Furthermore, there are unique challenges in CNN diagnosis using WSIs due to their gigapixel-level resolution, which makes direct application of standard CNNs computationally infeasible. To tackle this obstacle, a WSI can be tiled into many small patches, from which a subset of cancerous patches can be selected using manually annotated pixel-level regions of interest (ROIs). To avoid the heavy burden of manual annotation, weakly supervised learning techniques have been applied to train WSI-CNNs with slide- or patch-level coarse labels such as cancer or non-cancer , – .
In this work, we propose a neuropathologist-level integrated diagnosis model for automatically predicting 2021 WHO types and grades of adult-type diffuse gliomas from annotation-free standard WSIs. The model avoids the annotation burden by using patient-level tumor types directly as weak supervision labels while exploiting type-discriminative patterns through feature-domain clustering. The integrated diagnosis model is developed and externally tested using 2624 patients with adult-type diffuse gliomas from three hospitals. All datasets have the integrated histopathological and molecular information strictly required for 2021 WHO classification. Our study provides an integrated diagnosis model for automated and unbiased classification of adult-type diffuse gliomas.

Overview and patient characteristics

There were three datasets included in this study: Dataset 1 contained 1991 consecutive patients from the First Affiliated Hospital of Zhengzhou University (FAHZZU), Dataset 2 contained 305 consecutive patients from Henan Provincial People’s Hospital (HPPH), and Dataset 3 contained 328 consecutive patients from Xuanwu Hospital Capital Medical University (XHCMU). The selection pipeline was shown in Fig. . In total, 2624 patients were included as the study dataset (mean age, 50.97 years ± 13.04 [standard deviation]; 1511 male patients), comprising 503 A, 445 O, and 1676 GBM (Fig. ). The study dataset comprised a training cohort (n = 1362, mean age, 50.66 years ± 12.91; 787 men) from FAHZZU, a validation cohort (n = 340, mean age, 50.81 years ± 12.33; 195 men) from FAHZZU, an internal testing cohort (n = 289, mean age, 50.25 years ± 13.08; 172 men) from FAHZZU, an external testing cohort 1 (n = 305, mean age, 52.46 years ± 12.82; 171 men) from HPPH, and an external testing cohort 2 (n = 328, mean age, 50.82 years ± 14.25; 186 men) from XHCMU. The datasets were described in detail in Supplementary Methods A .
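For orientation, the 2021 WHO integrated-diagnosis logic behind the three labels A, O, and GBM can be sketched as a simplified decision function. This is a schematic reduction of the published rule to the markers named in this article (IDH, 1p/19q, CDKN2A, TERT, EGFR, +7/−10), not the study's actual labeling pipeline, and most grading criteria are omitted:

```python
def who2021_type(idh_mutant, codel_1p19q, cdkn2a_homdel=False,
                 tert_mut=False, egfr_amp=False, gain7_loss10=False,
                 gbm_histology=False):
    """Simplified sketch of the 2021 WHO rule for adult-type diffuse gliomas.
    Returns the integrated type; grading is omitted except for the CDKN2A
    criterion mentioned in the text."""
    if idh_mutant and codel_1p19q:
        return "Oligodendroglioma, IDH-mutant and 1p/19q-codeleted"
    if idh_mutant:
        if cdkn2a_homdel:
            return "Astrocytoma, IDH-mutant, grade 4"
        return "Astrocytoma, IDH-mutant"
    # IDH-wildtype diffuse astrocytic tumors qualify as glioblastoma with
    # GBM histology or any of the molecular surrogates named in the text.
    if gbm_histology or tert_mut or egfr_amp or gain7_loss10:
        return "Glioblastoma, IDH-wildtype"
    return "Not classifiable by this sketch"
```

For example, an IDH-wildtype tumor without glioblastoma histology but with a TERT promoter mutation is still labeled glioblastoma under this rule, which is exactly the histology-overlap scenario the article later evaluates as task 3.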
The clinical characteristics and integrated pathological diagnosis of the four cohorts are summarized in Supplementary Table . The detailed protocols for molecular testing are described in Supplementary Methods A –A . Representative results of IDH1/IDH2 mutations, 1p/19q deletions, CDKN2A homozygous deletion, EGFR amplification, and Chromosome 7 gain/Chromosome 10 loss are depicted in Supplementary Figs. – . The integrated classification pipeline according to the 2021 WHO rule was shown in Fig. and described in Supplementary Methods A . There was no significant difference in type, grade, gender, age, or IDH mutation status among the training cohort, internal validation cohort, and internal testing cohort (two-sided Wilcoxon test or Chi-square test P-value > 0.05).

Patch clustering-based integrated diagnosis model building

To select a subset of discriminative patches from a WSI, we clustered the patches based on their phenotypes and distinguished the more discriminative ones. The pipeline consisted of four steps: patch clustering, patch selection, patch-level classification, and patient-level classification, as shown in Fig. . The clustering process can be found in Supplementary Methods A . The CNN architecture and training parameters for patch selection were described in Supplementary Methods A . In the training cohort, 644,896 patches were extracted in total. Using a subset of 43,653 patches from 100 randomly selected patients in the training cohort, a K-means clustering model was developed, in which both the silhouette coefficient and the Calinski-Harabasz index reached their highest value at the optimal cluster number of nine, as shown in Fig. . Using the K-means algorithm, all 644,896 patches from the training cohort were partitioned into nine clusters. Correspondingly, nine separate patch-level CNN classifiers were obtained, and their patch-level accuracy in classifying the six categories was shown in Fig. .
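The cluster-number selection described above (scanning candidate K and taking the value that maximizes the silhouette coefficient and Calinski-Harabasz index) can be sketched with scikit-learn. The toy feature vectors below form three well-separated blobs, so the optimum here is 3; the study applied the same idea to real patch features and found an optimum of nine:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, calinski_harabasz_score

# Toy stand-in for patch feature vectors: three well-separated Gaussian
# blobs in 4-D, 60 points each.
rng = np.random.default_rng(0)
features = np.vstack([
    rng.normal(loc=center, scale=0.5, size=(60, 4))
    for center in (0.0, 10.0, 20.0)
])

# Score each candidate K with both indices used in the paper.
scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
    scores[k] = (silhouette_score(features, labels),
                 calinski_harabasz_score(features, labels))

# Pick the K maximizing the silhouette coefficient.
best_k = max(scores, key=lambda k: scores[k][0])
```

In practice one would verify, as the authors report, that both indices peak at the same K before fixing the cluster count.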
Among them, the three classifiers trained on clusters 2, 5, and 7 had higher accuracy than the benchmark classifier (shown by the green bar in Fig. ). Therefore, these three clusters, containing 275,741 patches in the training cohort, were selected for building the final patch-level classifier. The clustering results for three representative patients are shown in Fig. , illustrating patch heterogeneity across clusters and implying the capability of the clustering-based method to distinguish different image patterns. The tumor classification performance of the patch-level classifier built on the three selected clusters in each cohort is shown in Supplementary Fig. .

Classification performance of the integrated diagnosis model

The diagnostic model was obtained by aggregating the patch-level classifications into patient-level results. We first showed the patient-level cross-validation results. The ROC curves for each fold and the mean ROC curves over all folds for classifying the six categories on the validation cohort were shown in Supplementary Fig. , and the boxplots of AUCs in all folds were shown in Supplementary Fig. ; the results demonstrated model stability across folds. Next, we assessed the performance of the best model selected in cross-validation (the fifth model, corresponding to the ROC curves for fold 5 in Supplementary Fig. ) on multiple testing cohorts. In classifying the six categories (task 1) of A Grade 2, A Grade 3, A Grade 4, O Grade 2, O Grade 3, and GBM Grade 4 (A2, A3, A4, O2, O3, and GBM for short), the model achieved corresponding AUCs of 0.959, 0.995, 0.953, 0.978, 0.982, and 0.960 on the internal validation cohort; 0.970, 0.973, 0.994, 0.932, 0.980, and 0.980 on the internal testing cohort; 0.934, 0.923, 0.987, 0.964, 0.978, and 0.984 on external testing cohort 1; and 0.945, 0.944, 0.904, 0.942, 0.950, and 0.952 on external testing cohort 2, as shown in Fig. and Table .
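The patch-to-patient aggregation step can be illustrated with a minimal sketch. The study's exact aggregation rule is given in its Methods; mean-pooling of patch-level softmax probabilities, shown here with toy numbers, is one common choice and is used purely for illustration:

```python
import numpy as np

# The six categories in the order used by the paper's task 1.
CLASSES = ["A2", "A3", "A4", "O2", "O3", "GBM"]

def patient_prediction(patch_probs):
    """Aggregate patch-level softmax outputs (n_patches x 6) into one
    patient-level call by mean-pooling the class probabilities."""
    patch_probs = np.asarray(patch_probs, dtype=float)
    mean_probs = patch_probs.mean(axis=0)
    return CLASSES[int(mean_probs.argmax())], mean_probs

# Toy example: three patches from one slide, most evidence favoring GBM.
label, probs = patient_prediction([
    [0.05, 0.05, 0.10, 0.05, 0.05, 0.70],
    [0.10, 0.10, 0.20, 0.05, 0.05, 0.50],
    [0.30, 0.10, 0.10, 0.10, 0.10, 0.30],
])
```

Because each patch's probabilities sum to one, the pooled vector also sums to one, so the patient-level output remains a valid probability distribution over the six categories.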
In classifying the three types of A, O, and GBM while neglecting grades (task 2), the model achieved corresponding AUCs of 0.961, 0.974, and 0.960 on the internal validation cohort; 0.969, 0.974, and 0.980 on the internal testing cohort; 0.938, 0.973, and 0.983 on external testing cohort 1; and 0.941, 0.938, and 0.952 on external testing cohort 2, as shown in Fig. and Table . The PR curves of the diagnostic model for task 1 and task 2 were shown in Supplementary Fig. , demonstrating the model's performance under class imbalance. Considering that IDH-wildtype diffuse astrocytic tumors without the histological features of glioblastoma but with TERT promoter mutations, EGFR amplification, or Chromosome 7 gain/Chromosome 10 loss (classified as glioblastomas in the 2021 standard) may share similar histological features with IDH-mutant Grade 2–3 astrocytoma, we also assessed the model's ability to classify these two categories (task 3). In these two subgroups, our model achieved high performance, with AUCs ranging from 0.935 to 0.984 in all cohorts, as shown in Fig. and Table . On the other hand, the IDH-mutant glioblastoma of the 2016 WHO classification is classified as IDH-mutant astrocytoma grade 4 in the 2021 WHO classification and may share similar histological features, such as microvascular proliferation, with IDH-wildtype glioblastoma. Our model also achieved good performance in distinguishing these two subgroups, with AUCs ranging from 0.943 to 0.998 on all cohorts, as shown in Fig. and Table (task 4). Furthermore, we assessed the model performance in classifying tumor grades within each type. In classifying A2, A3, and A4 within the IDH-mutant astrocytoma subgroup (task 5), the model achieved high AUCs ranging from 0.907 to 0.998 across all grades on all cohorts, as shown in Supplementary Fig. and Table .
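The per-category AUCs reported here are one-vs-rest areas under the ROC curve. A minimal scikit-learn sketch on toy labels and scores (not study data) shows how such per-class AUCs are computed for a multi-class classifier:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

CLASSES = ["A", "O", "GBM"]

def one_vs_rest_aucs(y_true, probs):
    """Per-class one-vs-rest ROC AUCs: for each class, score the binary
    problem 'this class vs. all others' with that class's probability."""
    y_true = np.asarray(y_true)
    probs = np.asarray(probs, dtype=float)
    return {c: roc_auc_score((y_true == i).astype(int), probs[:, i])
            for i, c in enumerate(CLASSES)}

# Toy predictions for six patients (two per class), perfectly ranked.
y = [0, 0, 1, 1, 2, 2]
p = [[0.8, 0.1, 0.1],
     [0.6, 0.3, 0.1],
     [0.2, 0.7, 0.1],
     [0.3, 0.5, 0.2],
     [0.1, 0.2, 0.7],
     [0.1, 0.1, 0.8]]
aucs = one_vs_rest_aucs(y, p)
```

With perfectly ranked toy scores every class attains an AUC of 1.0; on real cohorts the per-class values differ, which is why the paper reports them separately per category and per cohort.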
In classifying O2 and O3 within the oligodendroglioma subgroup (task 6), the model maintained high AUCs ranging from 0.928 to 0.989 on all cohorts, as shown in Supplementary Fig. and Table . Moreover, we also assessed the performance in distinguishing IDH-mutant diffuse astrocytoma from IDH-mutant 1p/19q-codeleted oligodendroglioma (task 7), achieving subgroup AUCs ranging from 0.957 to 0.994 on all cohorts, as shown in Supplementary Fig. and Table .

Comparison with other classification models

The performance of the proposed clustering-based model was further compared with four previous models: a weakly supervised classical multiple-instance learning (MIL) model , , an attention-based MIL (AMIL) model , a clustering-constrained-attention MIL (CLAM) model , and the all-patch classification model. The AUCs of the classical MIL model and the all-patch model on all cohorts ranged from 0.793 to 0.997 in classifying the six categories (task 1) and from 0.894 to 0.981 in classifying the three major types (task 2), as shown in Supplementary Figs. and and Supplementary Data and .
Among the five models, the MIL model and its two variants were numerically inferior to or comparable with the clustering-based model, while the all-patch model lagged behind the other four models in all tasks. As shown in Supplementary Table , on most datasets the difference in AUCs between the clustering-based model and each of the three MIL models was not significant (Delong P > 0.05) in classifying the six categories (task 1). In classifying the three types (task 2), the AUC of the clustering-based model was significantly higher than that of the all-patch model on all testing datasets (Delong P < 0.05).

Interpretation of the CNN classification

To visualize and interpret the relative importance of different regions in classifying the tumors, the class activation maps (CAMs), along with the corresponding patches and WSIs, from ten representative patients were shown in Fig. . The CAM highlighted in red the regions that contributed most to the classification task. These highlighted regions were then evaluated and interpreted from a neuropathologist's perspective. As shown in Fig. , the ten examples were assigned to five groups, where the two examples in each group shared the same grades or histological features. These human-readable CAMs indicated that the classification basis of the clustering-based model generally aligned with pathological morphology well recognized by pathologists. For example, in distinguishing O2 from A2 or O3 from A3, our model generally highlighted the morphological characteristics of oligodendrocytes/astrocytes, consistent with human expertise. We also observed that, in classifying cases with shared histological features such as necrosis and microvascular proliferation, our model could capture features that might reflect underlying IDH mutations and CDKN2A homozygous deletion. These features may offer potential predictive value and might be useful in assisting human readers in achieving more accurate diagnoses.
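The CAM visualization referenced here can be sketched in a few lines. This follows one standard CAM formulation (final-layer convolutional feature maps weighted by the classifier weights of the target class, then rectified and normalized); the synthetic feature maps below are illustrative only, not the paper's network:

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """One standard CAM formulation: weight each final-layer feature map
    (K x H x W) by the classifier weight for the target class (K,),
    sum over maps, rectify, and normalize to [0, 1] for overlay."""
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # -> (H, W)
    cam = np.maximum(cam, 0.0)                               # ReLU
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Synthetic example: 3 feature maps of size 4x4; the class weights make
# the second map dominate, so the CAM peaks where that map activates.
fmaps = np.zeros((3, 4, 4))
fmaps[1, 2, 3] = 5.0      # strong activation in map 1 at pixel (2, 3)
fmaps[0, 0, 0] = 1.0      # weak activation elsewhere
weights = np.array([0.1, 1.0, -0.5])
cam = class_activation_map(fmaps, weights)
```

Upsampled to the patch size and rendered as a heat overlay, the normalized map yields the red high-contribution regions described in the text.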
There were three datasets included in this study: Dataset 1 contained 1991 consecutive patients from the First Affiliated Hospital of Zhengzhou University (FAHZZU), Dataset 2 contained 305 consecutive patients from Henan Provincial People’s Hospital (HPPH), and Dataset 3 contained 328 consecutive patients from Xuanwu Hospital Capital Medical University (XHCMU). The selection pipeline was shown in Fig. . Therefore, a total of 2624 patients were included in this study as the study dataset (mean age, 50.97 years ± 13.04 [standard deviation]; 1511 male patients), including 503 A, 445 O, and 1676 GBM (Fig. ). The study dataset comprised a training cohort ( n = 1362, mean age, 50.66 years ± 12.91; 787 men) from FAHZZU, a validation cohort ( n = 340, mean age, 50.81 years ± 12.33; 195 men) from FAHZZU, an internal testing cohort ( n = 289, mean age, 50.25 years ± 13.08; 172 men) from FAHZZU, an external testing cohort 1 ( n = 305, mean age, 52.46 years ± 12.82; 171 men) from HPPH, and external testing cohort 2 ( n = 328, mean age, 50.82 years ± 14.25; 186 men) from XHCMU. The datasets were described in detail in Supplementary Methods A . The clinical characteristics and integrated pathological diagnosis of the four cohorts are summarized in Supplementary Table . The detailed protocols for molecular testing are described in Supplementary Methods A –A . Representative results of IDH1/IDH2 mutations, 1p/19q deletions, CDKN2A homozygous deletion, EGFR amplification, and Chromosome 7 gain/Chromosome 10 loss are depicted in Supplementary Figs. – . The integrated classification pipeline according to the 2021 WHO rule was shown in Fig. and described in Supplementary Methods A . There was no significant difference in type, grade, gender, age, and IDH mutation status among the training cohort, internal validation cohort, and internal testing cohort (two-sided Wilcoxon test or Chi-square test P -value > 0.05). 
To select a subset of discriminative patches from a WSI, we clustered the patches based on their phenotypes and distinguished the more discriminative ones. The pipeline consisted of four steps: patch clustering, patch selection, patch-level classification, and patient-level classification, as shown in Fig. . The clustering process can be found in Supplementary Methods A . The CNN architecture and training parameters for patch selection were described in Supplementary Methods A . In the training cohort, 644,896 patches were extracted in total. Using a subset of 43653 patches from 100 randomly selected patients in the training cohort, a K -means clustering model was developed, where both the silhouette coefficient and the Calinski-Harabasz index reached their highest value at the optimal cluster number of nine, as shown in Fig. . Using the K -mean algorithm, all 644,896 patches from the training cohort were partitioned into nine clusters. Correspondingly, nine separate patch-level CNN classifiers were obtained, and their patch-level accuracy in classifying the six categories was shown in Fig. . Among them, three classifiers trained on cluster 2,5,7 had higher accuracy than the benchmark classifier (shown by the green bar in Fig. ). Therefore, the three clusters containing 275,741 patches in training cohort were selected for building the final patch-level classifier. The clustering results for three representative patients are shown in Fig. . It showed the patch heterogeneity across clusters, implying the capability of the clustering-based method in distinguishing different image patterns. The tumor classification performance of the patch-level classifier built on the three selected clusters in each cohort is shown in Supplementary Fig. . The diagnostic model was obtained by aggregating the patch-level classifications into patient-level results. We first showed the patient-level cross-validation results. 
The ROC curves for each fold and the mean ROC curves over all folds for classifying the six categories on the validation cohort were shown in Supplementary Fig. . The boxplots of AUCs in all folds were shown in Supplementary Fig. . The results demonstrated the model stability across different folds. Next, we assessed the performance of the best model (the fifth model, corresponding to ROC curves for fold 5 in Supplementary Fig. ) selected in cross-validation on multiple testing cohorts. In classifying the six categories (task 1) of A Grade 2, A Grade 3, A Grade 4, O Grade 2, O Grade 3, and GBM Grade 4 (short for A2, A3, A4, O2, O3, and GBM), the model achieved corresponding AUCs of 0.959, 0.995, 0.953, 0.978, 0.982, 0.960 on internal validation cohort, 0.970, 0.973, 0.994, 0.932, 0.980, 0.980 on internal testing cohort, 0.934, 0.923, 0.987, 0.964, 0.978, 0.984 on external testing cohort 1, and 0.945, 0.944, 0.904, 0.942, 0.950, 0.952 on external testing cohort 2, respectively, as shown in Fig. and Table . In classifying the three types of A, O, and GBM while neglecting grades (task 2), the model achieved corresponding AUCs of 0.961, 0.974 and 0.960 on internal validation cohort, 0.969, 0.974, 0.980 on internal testing cohort, and 0.938, 0.973 and 0.983 on external testing cohort 1, and 0.941, 0.938 and 0.952 on external testing cohort 2, respectively, as shown in Fig. and Table . The PR curves of the diagnostic model related to task 1 and task 2 were shown in Supplementary Fig. , demonstrating the model performance in this data imbalance problem. Considering that IDH-wildtype diffuse astrocytic tumors without the histological features of glioblastoma but with TERT promoter mutations, EGFR amplification, or Chromosome 7 gain/Chromosome 10 loss (classified as glioblastomas in 2021 standard) may share similar histological features with the IDH-mutant Grade 2–3 astrocytoma, we also assessed the model’s ability in classifying these two categories (task 3). 
In these two subgroups, our model achieved high performance with AUCs ranging from 0.935 to 0.984 in all cohorts, as shown in Fig. and Table . On the other hand, the IDH-mutant glioblastoma in the 2016 WHO classification is classified as IDH-mutant astrocytoma grade 4 in the 2021 WHO classification, which may share similar histological features such as microvascular proliferation with IDH-wildtype glioblastoma. Our model also achieved good performance in distinguishing these two subgroups with AUCs ranging from 0.943 to 0.998 on all cohorts, as shown in Fig. and Table (task 4). Furthermore, we assessed the model performance in classifying tumor grades within the type. In classifying A2, A3, and A4 within the IDH-mutant astrocytoma subgroup (task 5), the model achieved high AUCs ranging from 0.907 to 0.998 across all grades on all cohorts, as shown in Supplementary Fig. and Table . In classifying O2 and O3 within the oligodendroglioma subgroup (task 6), the model maintained high AUCs ranging from 0.928 to 0.989 on all cohorts, as shown in Supplementary Fig. and Table . Moreover, we also assessed the performance in distinguishing IDH-mutant diffuse astrocytoma with IDH-mutant 1p/19q-codeleted oligodendroglioma (task 7), achieving subgroup AUCs ranging from 0.957 to 0.994 on all cohorts, as shown in Supplementary Fig. and Table . The performance of the proposed clustering-based model was further compared with four previous models, a weakly supervised classical multiple-instance learning (MIL) model , , an attention-based MIL (AMIL) model , a clustering-constrained-attention MIL (CLAM) , and the all-patch classification model. The AUCs of the classical MIL model and the all-patch model on all cohorts ranged from 0.793 to 0.997 in classifying the six categories (task 1) while ranged from 0.894 to 0.981 in classifying the three major types (task 2), as shown in Supplementary Figs. and and Supplementary Data and . 
The two advanced methods, AMIL and CLAM, did not show significant improvement in AUCs in tasks 1 and 2 compared with classical MIL, as shown in Supplementary Figs. and , respectively. The AUCs of all five models were summarized in Supplementary Table . The results of the Delong analysis between the AUCs of the clustering-based model and other models were summarized in Supplementary Table . In classifying tumor grades within types, distinguishing IDH-mutant astrocytoma with IDH-mutant 1p/19q-codeleted oligodendroglioma, and distinguishing IDH-mutant astrocytoma with astrocytoma-like IDH-wildtype glioblastoma, the performance of the MIL model and the all-patch model was summarized in Supplementary Figs. and and Supplementary Data and (tasks 3−7). Among the five models, the MIL model and its two variants were numerically inferior to or comparable with the clustering-based model, while the all-patch model lagged the other four models in all tasks. As shown in Supplementary Table , on most datasets the difference in AUCs between the clustering-based model and each of the three MIL models was not significant (Delong P > 0.05) in classifying the six tumor types (task 1). In classifying the three types (task 2), the AUC of the clustering-based model was significantly higher than that of the all-patch model on all testing datasets (Delong P < 0.05). To visualize and interpret the relative importance of different regions in classifying the tumors, the class activation maps (CAM) along with the corresponding patches and WSIs from ten representative patients were shown in Fig. . The CAM highlighted in red which regions contributed most to the classification task. These highlighted regions were then evaluated and interpreted from neuropathologist’s perspectives. As shown in Fig. , the ten examples were assigned to five groups, where the two examples in each group shared the same grades or histological features. 
This human-readable CAM indicated that the classification basis of the clustering-based model generally aligned with pathological morphology well recognized by pathologists. For example, in distinguishing O2 from A2 or O3 from A3, our model generally highlighted morphological characteristics of oligodendrocytes/astrocytes, which were consistent with human expertise. We also observed that in classifying cases with shared histological features including necrosis and microvascular proliferation, features that might reflect underlying IDH mutations and CDKN2A homozygous deletion can be captured by our model. These features may offer potential predictive value and might be useful in assisting human readers in achieving more accurate diagnoses. In this study, we presented a CNN-based integrated diagnosis model that was capable of automatically classifying adult-type diffuse gliomas according to the 2021 WHO standard from annotation-free WSIs. We compiled a large dataset including 2624 patients with both histological and molecular information. Extensive validation and comparative studies confirmed the accuracy and generalization ability of our model. Compared to previous work, our research had several strengths by addressing the key challenges in computational pathology: (1) The deep-learning model can be trained with only tumor types as weakly supervised labels by using a patch clustering technique, which obviated the burden of pixel-level or patch-level annotations. (2) Using only pathological images, our model enables high-performance integrated diagnosis that traditionally requires combining pathological and molecular information. This was made possible through a clustering-based CNN that can learn imaging features containing both pathological morphology and underlying biological clues.
(3) Using a large training dataset including 644,896 patch images from 1362 patients, our model can generalize to an internal testing cohort and two external testing cohorts, with strong performance in classifying major types, grades within type, and especially in distinguishing genotypes with shared histological features. Several WSI CNN models have been developed for predicting histological grades according to the 2007 WHO classification in patients with glioma , , , . For instance, Ertosun et al. applied CNN to perform binary classification between glioblastoma and lower-grade glioma with an accuracy of 96%, and between grade II and III glioma with an accuracy of 71% . Jin et al. presented a diagnostic platform to classify five major categories considering both histological grades and molecular markers based on 323 patients, with an accuracy of 87.5% . However, to date, there are no CNN-based integrated diagnostic models strictly according to the 2021 WHO classification, which introduces substantial changes compared to previous editions. Jose L et al. developed a CNN model using The Cancer Genome Atlas dataset to classify three types of gliomas considering two molecular markers (IDH mutation and 1p/19q codeletion) based on the 2021 WHO standard, with an accuracy of 86.1% and an AUC of 0.961 . To our knowledge, our CNN model is the first that can classify gliomas into all six types strictly adhering to the 2021 rule. To achieve this, we collected a much larger dataset and performed the integrated diagnosis for each patient according to the 2021 WHO criteria, where more comprehensive molecular information including IDH mutation, 1p/19q codeletion, CDKN2A homozygous deletion, TERT promoter mutation, EGFR amplification, and Chromosome 7 gain/Chromosome 10 loss was obtained to determine the types.
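The marker-to-type mapping just described can be made concrete with a small rule-based function. This is a simplified sketch of the 2021 criteria as summarized in this paper, not part of the study's pipeline: the function name and boolean arguments are illustrative, histological grade is taken as a given input, and NOS/NEC edge cases are omitted.

```python
def who2021_type(idh_mutant, codel_1p19q, histo_grade=2,
                 cdkn2a_del=False, necrosis_or_mvp=False,
                 tert_mut=False, egfr_amp=False, gain7_loss10=False):
    """Map molecular markers plus a histological grade (2 or 3) to one of
    the six adult-type diffuse glioma categories (A2/A3/A4/O2/O3/GBM)."""
    if not idh_mutant:
        # cIMPACT-NOW update 3: IDH-wildtype astrocytic glioma with grade-4
        # histology or any molecular feature of glioblastoma is GBM, grade 4
        if necrosis_or_mvp or tert_mut or egfr_amp or gain7_loss10 or histo_grade == 4:
            return "GBM"
        return None  # outside the six adult-type categories in this sketch
    if codel_1p19q:
        return f"O{histo_grade}"  # oligodendroglioma, IDH-mutant, 1p/19q-codeleted
    if cdkn2a_del or necrosis_or_mvp:
        # cIMPACT-NOW update 5: IDH-mutant astrocytoma, grade 4
        return "A4"
    return f"A{histo_grade}"      # IDH-mutant astrocytoma, grade 2 or 3

print(who2021_type(idh_mutant=False, codel_1p19q=False, tert_mut=True))   # → GBM
print(who2021_type(idh_mutant=True, codel_1p19q=True, histo_grade=3))     # → O3
print(who2021_type(idh_mutant=True, codel_1p19q=False, cdkn2a_del=True))  # → A4
```

The same boolean logic underlies the integrated labels used for training, while the model itself sees only the images.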
To emphasize the integrated diagnosis, the 2021 edition introduces a new “grades within type” classification system, where both grades and types are determined by combining histological and molecular information. In our study, we predicted the tumor grades/types directly from pathological images, and no molecular information was fed into the model. This implies that our model can learn molecular characteristics from pathological images to achieve an integrated diagnosis. Several studies have also shown the ability of CNNs to recognize genetic alterations directly from WSI, such as mutation detection , – , , , microsatellite instability prediction , and pan-cancer genetic profiling , . In a recent study on CNN-based pathological diagnosis , the glioma classification was extended from three histological grades to five categories by adding the IDH and 1p/19q status. However, it is not a strictly WHO-consistent integrated classification, and the dataset with molecular information is relatively small ( n = 296). Generally, these studies indicated a potential link between the tumor’s histopathological morphology and underlying molecular composition. Our clustering-based CNN model dedicated to learning the most representative features from the entire WSI had two major advantages. First, it avoided the need for any manual annotation by automatically selecting several type-relevant patch clusters that contributed more to the integrated classification task. Second, it aggregated local features to reach a global diagnosis by selectively fusing the most discriminative information from multiple relevant patches. Traditionally, manual annotation is required to delineate cancerous regions of interest for CNN training . However, manual delineation is time-consuming and subjective. To avoid pixel-level annotation, weakly supervised methods were developed where experts can assign a label to an image.
Among them, MIL and its variants employing a “bag learning” strategy have been widely used in WSI classification , . Our study compared the presented clustering approach with the classical MIL and its two variants, the AMIL and CLAM , demonstrating the superior performance of our approach in classifying the six integrated types, the three histological categories, and the grades within each type. In particular, our clustering model also achieved high performance in classifying several histologically similar subgroups, i.e., IDH-mutant vs. IDH-wildtype tumors with similar morphology, and 1p/19q-codeleted vs. 1p/19q non-codeleted tumors with similar morphology. These new classifications are also the major changes introduced by the 2021 WHO rule. Furthermore, the attention mechanism incorporated in both AMIL and CLAM did not seem to bring as much benefit as expected. One reason might be the high degree of variability and complexity within the pathologic data, making it hard to learn effective attention weights for instances related to the target classes. Specifically, to classify the six types according to the 2021 WHO rule, the model needs to identify discriminative morphology related to histologic types (A, O, and GBM) and grades within types (A2/3/4, O2/3), and tumor genotypes with shared histologic features (e.g., IDH-wildtype and -mutant tumors). Furthermore, some key instances might be sparse (e.g., microvascular proliferation or necrosis). The discriminative features might be contained in the same instances, in many different instances, or in sparse instances. These key instances may be too diverse and complex to be recognized by an attention mechanism. Moreover, we suspect that the label noise induced by the simplified slide-to-patch label assignment would also impair the attention weights to some extent. Instead of emphasizing key patches, we turned to searching for important patch clusters with similar imaging phenotypes.
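For readers unfamiliar with attention-based pooling, the AMIL-style aggregation discussed here can be illustrated with a minimal numpy sketch; the dimensions and random weights below are arbitrary placeholders, not the parameters of the compared models:

```python
import numpy as np

def attention_pool(H, V, w):
    """Attention-based MIL pooling: each instance embedding h_i receives a
    scalar score w^T tanh(V h_i); a softmax over instances turns the scores
    into weights, and the bag embedding is the weighted sum of instances."""
    scores = np.tanh(H @ V.T) @ w          # one score per instance
    a = np.exp(scores - scores.max())
    a /= a.sum()                           # softmax attention weights
    return a, a @ H                        # weights and bag embedding

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))                # a toy bag of 5 patch embeddings
V = rng.normal(size=(16, 8))               # hidden projection (learned in AMIL)
w = rng.normal(size=16)                    # scoring vector (learned in AMIL)
a, bag = attention_pool(H, V, w)
print(round(a.sum(), 6), bag.shape)        # → 1.0 (8,)
```

When a few key instances dominate, the weights concentrate on them; with noisy patch labels and heterogeneous instances, as discussed above, the learned weights can fail to do so.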
Our data as well as the CAM visualization suggested the capability of the clustering-based model in recognizing not only pathological morphology such as microvascular proliferation and necrosis useful for histological classification, but also imaging patterns reflecting underlying genomic alterations useful for the integrated diagnosis. Despite the encouraging results, three limitations should be pointed out. First, although our dataset comprises 2624 patients from three hospitals, a future international, multicenter, and multiracial dataset with a larger sample size is warranted. Second, in our study, all slides from the three hospitals were scanned using the same digital scanner to ensure consistency. To address the impact of scanner variability and develop a classifier with good robustness in clinical practice, we plan to collect a larger dataset of WSIs obtained from a variety of scanners. Advanced stain normalization may be required to enhance the model’s robustness. We will also assess the impact of different stain normalization methods, as the variations in stain intensity may affect the performance of deep-learning models. Third, more preclinical experimental work at the genome, transcriptome, proteome, and animal levels is needed to further elucidate the biological interpretability of the deep-learning model. In conclusion, our data suggested that the presented CNN model can achieve high-performance, fully automated integrated diagnosis that adheres to the 2021 WHO classification from annotation-free WSIs. Our model has the potential to be used in clinical scenarios for unbiased classification of adult-type diffuse gliomas. Patients and datasets This study was a part of the registered clinical trial (ClinicalTrials ID: NCT04217044).
This study was approved by the Human Scientific Ethics Committee of the First Affiliated Hospital of Zhengzhou University (FAHZZU), Henan Provincial People’s Hospital (HPPH), and Xuanwu Hospital Capital Medical University (XHCMU). Informed consent and participant compensation were waived by the Committee due to the retrospective and anonymous analysis. There were three datasets included in this study: Dataset 1 contained 1991 consecutive patients from FAHZZU, Dataset 2 contained 305 consecutive patients from HPPH, and Dataset 3 contained 328 consecutive patients from XHCMU. Dataset 1 includes three cohorts: (1) a training cohort ( n = 1362, from FAHZZU) used to develop the glioma type/grade classification model, (2) a validation cohort ( n = 340, from FAHZZU) used to optimize the model, and (3) an internal testing cohort ( n = 289, from FAHZZU) used to test the model. The training and validation cohorts were selected with stratified random sampling from the FAHZZU patient set collected from January 2011 to December 2019 at a ratio of 4:1, where the clinical parameters between both cohorts were balanced. We repeated this procedure in a five-fold cross-validation approach, re-assigning the patients into training and validation cohorts five times. Patients from FAHZZU between January 2020 and December 2020 were used as the internal testing cohort. Dataset 2 was used as external testing cohort 1, and Dataset 3 was used as external testing cohort 2. The datasets were described in detail in Supplementary Methods A .
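The 4:1 stratified division can be mimicked with a short numpy routine. This is a simplified sketch: the study's split additionally balanced clinical parameters between the cohorts, which is not modeled here.

```python
import numpy as np

def stratified_split(labels, ratio=0.8, seed=0):
    """Split sample indices into train/validation sets so that every class
    keeps approximately the same train:val ratio (4:1 for ratio=0.8)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    train, val = [], []
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)                   # randomize within each class
        cut = int(round(ratio * len(idx)))
        train.extend(idx[:cut])
        val.extend(idx[cut:])
    return np.array(train), np.array(val)

# Toy example: six tumor types with unequal frequencies.
labels = np.repeat(["A2", "A3", "A4", "O2", "O3", "GBM"],
                   [30, 10, 10, 25, 10, 80])
tr, va = stratified_split(labels)
print(len(tr), len(va))                    # → 132 33
```

Re-running with different seeds re-assigns patients to the two cohorts, loosely analogous to the five repetitions described above.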
The inclusion criteria are as follows: (1) adult patients (>18 years) surgically treated and pathologically diagnosed as diffuse gliomas (WHO Grade 2–4), (2) availability of clinical, histological, and molecular data, (3) availability of sufficient formalin-fixed, paraffin-embedded (FFPE) tumor tissues for testing for molecular markers in the 2021 WHO classification of adult-type diffuse gliomas, (4) availability of H&E slides for scanning as digitalized WSIs, and (5) sufficient image quality of digitalized WSIs. The selection pipeline is shown in Fig. . Determination of WHO classification In the 5 years since the publication of the 2016 edition of the WHO classification of CNS tumors, the development of targeted sequencing and omics techniques has helped neuro-oncologists gradually establish some new tumor types in clinical practice, as well as a series of molecular markers. Based on seven updates from the Consortium to Inform Molecular and Practical Approaches to CNS Tumor Taxonomy (cIMPACT-NOW), the International Agency for Research on Cancer (IARC) released the 5th edition of the WHO Classification of Tumors of the CNS. According to cIMPACT-NOW update 3 , despite appearing histologically as grade II and III, IDH-wildtype diffuse astrocytic gliomas that contain high-level EGFR amplification (excluding low-level EGFR copy number gains, e.g., trisomy 7), or whole chromosome 7 gain and whole chromosome 10 loss (+7/−10), or TERT promoter mutations, correspond to WHO grade IV and should be referred to as diffuse astrocytic glioma, IDH-wildtype, with molecular features of glioblastoma, WHO grade 4. According to cIMPACT-NOW update 5 , a diffusely infiltrative astrocytic glioma with an IDH1 or IDH2 mutation that exhibits microvascular proliferation or necrosis or CDKN2A/B homozygous deletion or any combination of these features should be referred to as Astrocytoma, IDH-mutant, WHO grade 4.
Thus, in the 5th edition of the WHO CNS classification, adult-type diffuse gliomas are divided into (1) Astrocytoma, IDH-mutant, Grade 2, 3, 4; (2) Oligodendroglioma, IDH-mutant and 1p/19q-codeleted, Grade 2, 3; and (3) Glioblastoma, IDH-wildtype, Grade 4 (A2, A3, A4, O2, O3, and GBM) . Therefore, in our study, formalin-fixed, paraffin-embedded (FFPE) tissues were used for the detection of ATRX by immunohistochemistry (IHC), for detection of mutational hotspots in IDH1/IDH2 and the TERT promoter by Sanger sequencing, as well as for detection of Chromosome 1p/19q, CDKN2A, EGFR, and chromosome 7/10 status by fluorescence in situ hybridization (FISH). The detailed protocols are described in Supplementary Methods A and A . The integrated classification pipeline according to the 2021 WHO rule is shown in Fig. and described in Supplementary Methods A . WSI data acquisition and preprocessing The slides were scanned using the MAGSCAN-NER scanner (KF-PRO-005, KFBIO) to obtain the WSIs. In our study, each patient had one WSI. As tissues generally occupy only a portion of the slide, with large areas of white background space in a WSI, tissue segmentation should be performed first. The WSI at 5× resolution was transformed from RGB to Lab color space, and the tissue was segmented with a threshold value calculated using the Otsu algorithm. The segmented tissue image was divided into many 1024 × 1024 patches at 20× objective magnification (0.5 microns per pixel). The patches were adjacent to one another, covering the entire WSI. From all 2624 patients, a total of 1,292,420 patches were extracted, as shown in Fig. . The number of patches in different WSIs varied from hundreds to more than 2000. Each WSI belonged to one of the six categories: A2, A3, A4, O2, O3, and GBM. This patient-level label was also assigned to each patch within one WSI. All classifiers in the following were trained to predict the six tumor types.
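The preprocessing just described — Otsu thresholding to separate tissue from background, then tiling the tissue into fixed-size patches — can be sketched with numpy alone. The toy 64 × 64 image and 16 × 16 patches below are illustrative stand-ins for real slides and 1024 × 1024 patches, and the RGB-to-Lab conversion is omitted for brevity:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the threshold maximizing the between-class
    variance of the intensity histogram."""
    hist, edges = np.histogram(gray, bins=256, range=(0.0, 1.0))
    p = hist / hist.sum()
    w0 = np.cumsum(p)                      # class-0 probability up to each bin
    m = np.cumsum(p * np.arange(256))      # cumulative index-weighted mean
    mT = m[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mT * w0 - m) ** 2 / (w0 * (1 - w0))
    var_between = np.nan_to_num(var_between)
    return edges[var_between.argmax() + 1]

def tile(mask, size):
    """Yield top-left corners of size×size patches that are mostly tissue."""
    H, W = mask.shape
    for y in range(0, H - size + 1, size):
        for x in range(0, W - size + 1, size):
            if mask[y:y + size, x:x + size].mean() > 0.5:
                yield (y, x)

# Toy slide: dark "tissue" square on a bright background.
img = np.full((64, 64), 0.9)
img[16:48, 16:48] = 0.2
t = otsu_threshold(img)
tissue = img < t                           # tissue is darker than background
patches = list(tile(tissue, 16))
print(round(t, 6), len(patches))           # → 0.203125 4
```

On real WSIs the same two steps yield the hundreds to thousands of tissue patches per slide mentioned above.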
Integrated diagnosis model building We aimed to find a subset of discriminative patches from a WSI. Considering that a group of patches may share similar imaging patterns or phenotypes, we clustered the patches based on their phenotypes and distinguished the clusters with better discriminative power. The pipeline consisted of four steps: patch clustering, patch selection, patch-level classification, and patient-level classification, as shown in Fig. . Patch clustering First, the patch clustering algorithm was trained using 43,653 candidate patches from 100 randomly selected patients in the training cohort, including 11 A2, 2 A3, 2 A4, 14 O2, 3 O3, and 68 GBM patients. Considering that the original image may not present type-relevant cancer phenotypes, we chose to cluster the patches in the feature domain. The patches were resized to 256 × 256 and fed into a pre-trained CNN for deep feature extraction. Here a ResNet-50 trained with patch-level labels (six categories) on all patches in the training cohort was used as the CNN feature extractor (referred to as the all-patch classifier). Using this trained ResNet-50, 2048 deep features were extracted from the average pooling layer for each patch. Based on these features, the 43,653 candidate patches from the 100 patients were used to develop a K-means clustering algorithm by partitioning them into K clusters, where the optimal cluster number K was determined using the silhouette coefficient. The Calinski-Harabasz index was also used to additionally assess the clustering quality. The patches in different clusters were considered to have discriminative imaging patterns related to cancer types. The clustering process can be found in Supplementary Methods A . Patch selection Using the established K-means clustering algorithm, all patches from each patient in the training cohort were partitioned into K clusters.
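The clustering step — K-means on deep features with the silhouette coefficient choosing K — can be illustrated on synthetic feature vectors. This is a numpy-only sketch with a simple deterministic initialization; the study's exact clustering setup is in its Supplementary Methods.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain Lloyd's algorithm with a deterministic spread-out init."""
    C = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        lab = d.argmin(1)
        C = np.array([X[lab == j].mean(0) if np.any(lab == j) else C[j]
                      for j in range(k)])
    return lab

def silhouette(X, lab):
    """Mean silhouette s_i = (b_i - a_i) / max(a_i, b_i): a_i is the mean
    intra-cluster distance, b_i the smallest mean distance to another cluster."""
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    s = []
    for i in range(len(X)):
        own = lab == lab[i]
        a = D[i, own].sum() / max(own.sum() - 1, 1)
        b = min(D[i, lab == j].mean() for j in set(lab) if j != lab[i])
        s.append((b - a) / max(a, b))
    return float(np.mean(s))

# Two well-separated blobs of toy "patch features": K = 2 should win.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (40, 5)), rng.normal(4, 0.3, (40, 5))])
scores = {k: silhouette(X, kmeans(X, k)) for k in (2, 3, 4)}
best_k = max(scores, key=scores.get)
print(best_k)                              # → 2
```

The Calinski-Harabasz index mentioned above can be computed on the same labelings as a second check of clustering quality.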
Next, K separate patch-level CNN classifiers were trained on the K patch clusters for all patients in the training cohort, where ResNet-50 was used as the CNN architecture and the training parameters were the same as those used in the all-patch classifier. The K clusters obtained in the validation cohort were used to optimize the K corresponding classifiers. The K cluster-based classifiers may have different power in classifying the tumor types. Here we used the all-patch classifier as a performance benchmark. For each patient, the clusters with better classification accuracy than the benchmark were selected for further analysis. The CNN architecture and training parameters are detailed in Supplementary Methods A . Patch-level classification Using the patches from all selected clusters, a patch-level ResNet-50 model was trained on the training cohort and optimized on the validation cohort. The same training parameters were used. This network provided an estimation of the tumor type for each input image patch. These patch-level estimations were then aggregated to make a final patient-level prediction. Patient-level classification The patch-level predictions were aggregated to determine the type of the entire WSI using a majority voting approach. Specifically, the class to which the maximum number of patches belonged was used as the final patient-level prediction. This aggregation approach can reduce the bias of patch-level prediction. Model selection To assess the model’s robustness and to select an optimal model, we repeated the training/validation cohort division procedure five times using five-fold cross-validation. In each repetition, the training and validation sets were divided using stratified random resampling with patient characteristics balanced between both sets. During the cross-validation process, the model was trained for a minimum of 50 epochs.
Then, the loss on the validation set was computed in each epoch, where the model with the lowest average validation loss over 10 consecutive epochs was saved. If such a model was not found, the training continued up to a maximum of 150 epochs. Finally, the patient-level model with the best average performance across all folds was selected as the proposed diagnostic model. Statistical analysis All statistical analysis was performed using Python (version 3.6.1), and a P-value < 0.05 was considered significant. Specifically, the packages or software comprised PyTorch 1.10.0 for model training and testing, CUDA 11.6 and cuDNN 8.1.0.77 for GPU acceleration, and scikit-learn 1.0.2 for statistical analysis. All CNNs were trained on two NVIDIA Tesla V100 GPUs. The difference in patient characteristics between the training and the other cohorts was assessed by a two-sided Wilcoxon test or Chi-square test. The patch-level classifiers were trained on the training cohort and optimized on the validation cohort. The performance of the optimal patient-level classifiers in five-fold cross-validation was further tested on the internal testing cohort and the two external testing cohorts. Receiver operating characteristic (ROC) analysis was used for performance evaluation in terms of area under the ROC curve (AUC), accuracy, sensitivity, specificity, and F1-score in classifying the six categories A2, A3, A4, O2, O3, and GBM. These metrics were calculated using a one-vs.-rest approach in the multi-class problem. The average AUC over the six categories on the validation cohort for each fold was used to select the best model in cross-validation. To address the class imbalance problem, precision-recall (PR) curves were also calculated to comprehensively assess the model performance.
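The one-vs-rest AUC evaluation can be reproduced in a few lines via the rank-sum formulation. This is a sketch on synthetic predictions; the class names and toy probabilities are illustrative, and tie handling is omitted:

```python
import numpy as np

def binary_auc(scores, labels):
    """ROC AUC via the rank-sum (Mann-Whitney U) formulation (no ties)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)   # 1-based ranks
    pos = labels.astype(bool)
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def ovr_macro_auc(probs, y, classes):
    """One-vs-rest AUC per class, plus the macro average."""
    aucs = {c: binary_auc(probs[:, i], (y == c).astype(int))
            for i, c in enumerate(classes)}
    return aucs, float(np.mean(list(aucs.values())))

# Toy 3-class example with informative class probabilities.
rng = np.random.default_rng(0)
classes = np.array(["A", "O", "GBM"])
y = rng.choice(classes, size=60)
probs = rng.random((60, 3))
probs[np.arange(60), [list(classes).index(c) for c in y]] += 1.0  # boost true class
probs /= probs.sum(1, keepdims=True)
aucs, macro = ovr_macro_auc(probs, y, classes)
print(round(macro, 2))
```

Averaging the per-class AUCs in this way corresponds to the per-fold model-selection criterion described above.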
In addition, the performance of the clustering-based model was compared with another four models, a weakly supervised classical multiple-instance learning (MIL) model , , an attention-based MIL (AMIL) model , a clustering-constrained-attention MIL (CLAM) , and the all-patch classification model. Briefly, in MIL the patches with the highest score (that were most likely to be cancerous) were selected for diagnosis model building. AMIL and CLAM were two variants of MIL, where the former learned to emphasize the patches related to the target classes while the latter extended AMIL to a general multi-class with a refined feature space. As described before, the all-patch model used all patches for classification without patch selection. The statistical difference between AUCs was compared using DeLong analysis. Reporting of the study adhered to the STARD guideline . Reporting summary Further information on research design is available in the linked to this article. This study was a part of the registered clinical trial (ClinicalTrials ID: NCT04217044). This study was approved by the Human Scientific Ethics Committee of the First Affiliated Hospital of Zhengzhou University (FAHZZU), Henan Provincial People’s Hospital (HPPH), and Xuanwu Hospital Capital Medical University (XHCMU). Informed consent and participant compensation were waived by the Committee due to the retrospective and anonymous analysis. There were three datasets included in this study: Dataset 1 contained 1991 consecutive patients from FAHZZU, Dataset 2 contained 305 consecutive patients from HPPH, and Dataset 3 contained 328 consecutive patients from XHCMU. Dataset 1 includes three cohorts: a (1) training cohort ( n = 1362, from FAHZZU) used to develop the glioma type/grade classification model, a (2) validation cohort ( n = 340, from FAHZZU) used to optimize the model, and a (3) internal testing cohort ( n = 289, form FAHZZU) used to test the model. 
The training and validation cohorts were selected with stratified random sampling from the FAHZZU patient set collected from January 2011 to December 2019 at a ratio of 4:1, where the clinical parameters between both cohorts were balanced. We repeated this procedure in a five-fold cross-validation approach, re-assigning the patients into training and validation cohorts five times. Patients from FAHZZU between January 2020 and December 2020 were used as the internal testing cohort. Dataset 2 was used as an external testing cohort 1, and dataset 3 was used as an external testing cohort 2. The datasets were described in detail in Supplementary Methods A . The inclusion criteria are as follows: (1) adult patients (>18 years) surgically treated and pathologically diagnosed as diffuse gliomas (WHO Grade 2–4), (2) availability of clinical, histological, and molecular data, (3) availability of sufficient formalin-fixed, paraffin-embedded (FFPE) tumor tissues for testing for molecular markers in the 2021 WHO classification of adult-type diffuse gliomas, (4) availability of H&E slides for scanning as digitalized WSIs, (4) sufficient image quality of digitalized WSIs. The selection pipeline is shown in Fig. . In the last 5 years since the publication of the 2016 Edition of the WHO CNS, the development of targeted sequencing and omics techniques has helped neuro-oncologists gradually establish some new tumor types in clinical practice, as well as a series of molecular markers. Based on 7 updates at the Consortium to Inform Molecular and Practical Approaches to CNS Tumor Taxonomy (cIMPACT-NOW), the International Agency for Research on Cancer (IARC) has finally released the 5th edition of the WHO Classification of Tumors of the CNS. 
According to cIMPACT-NOW update 3 , despite appearing histologically as grade II and III, IDH-wildtype diffuse astrocytic gliomas that contain high-level EGFR amplification (excluding low-level EGFR copy number gains, e.g., trisomy 7), or whole chromosome 7 gain and whole chromosome 10 loss (+7/−10), or TERT promoter mutations, correspond to WHO grade IV and should be referred to as diffuse astrocytic glioma, IDH-wildtype, with molecular features of glioblastoma, WHO grade 4. According to cIMPACT-NOW update 5 , diffusely infiltrative astrocytic glioma with an IDH1 or IDH2 mutation that exhibits microvascular proliferation or necrosis or CDKN2A/B homozygous deletion or any combination of these features should be referred to as Astrocytoma, IDH-mutant, WHO grade 4. Thus, in 5th edition of the WHO CNS, adult-type diffuse gliomas are divided into (1) Astrocytoma, IDH-mutant, Grade 2,3,4; (2) Oligodendroglioma, IDH-mutant, and 1p/19q-codeleted, Grade 2,3 and (3) Glioblastoma, IDH-wildtype, Grade 4 (A2, A3, A4, O2, O3, and GBM) . Therefore, in our study, formalin-fixed, paraffin-embedded (FFPE) tissues were used for the detection of ATRX by immunohistochemistry (IHC), and for detection of mutational hotspots in IDH1/IDH2 and TERT promoter by Sanger sequencing, as well as for detection of Chromosome 1p/19q, CDKN2A, EGFR and chromosome 7/10 status by fluorescence in situ hybridization (FISH). The detailed protocols are described in Supplementary Methods A and A . The integrated classification pipeline according to the 2021 WHO rule is shown in Fig. and described in Supplementary Methods A . The slides were scanned using the MAGSCAN-NER scanner (KF-PRO-005, KFBIO) to obtain the WSI. In our study, one patient had one WSI. As tissues generally occupy a portion of the slide with large areas of white background space in a WSI, tissue segmentation should be performed first. 
The WSI at the 5× resolution was transformed from RGB to Lab color space and the tissue was segmented with a threshold value calculated using the OSTU algorithm. The segmented tissue image was divided into many 1024 × 1024 patches at 20 × objective magnifications (0.5 microns per pixel). The patches were adjacent to one another covering the entire WSI. From all 2624 patients, a total of 1292420 patches were extracted, as shown in Fig. . The number of patches in different WSIs varied from hundreds to more than 2000. Each WSI belonged to one of the six categories: A2, A3, A4, O2, O3, and GBM. This patient-level label was also assigned to each patch within one WSI. All classifiers in the following were trained to predict the six tumor types. We aimed to find a subset of discriminative patches from a WSI. Considering that a group of patches may share similar imaging patterns or phenotypes, we clustered the patches based on their phenotypes and distinguished the clusters with better discriminative power. The pipeline consisted of four steps: patch clustering, patch selection, patch-level classification, and patient-level classification, as shown in Fig. . Patch clustering First, the patch clustering algorithm was trained using 43653 candidate patches from 100 randomly selected patients in the training cohort, including 11 A2, 2 A3, 2 A4, 14 O2, 3 O3, and 68 GBM patients. Considering that the original image may not present type-relevant cancer phenotypes, we chose to cluster the patches in the feature domain. The patches were resized into 256 × 256 and were fed into a pre-trained CNN for deep feature extraction. Here a ResNet-50 trained with patch-level labels (six categories) on all patches in the training cohort was used as the CNN feature extractor (referred to as all-patch classifier). Using this trained ResNet-50, 2048 deep features can be extracted from the averaging pooling layer for each patch. 
Based on the features, the candidate 43,653 patches for the 100 patients were used to develop a K -means clustering algorithm by partitioning these patches into K clusters, where the optimal cluster number K was determined using the silhouette coefficient. The Calinski-Harabasz index was also used to additionally assess the clustering quality. The patches in different clusters were considered to have discriminative imaging patterns related to cancer types. The clustering process can be found in Supplementary Methods A . Patch selection Using the established K -means clustering algorithm, all patches from each patient in the training cohort were partitioned into K clusters. Next, K separate patch-level CNN classifiers were trained on the K patch clusters for all patients in the training cohort respectively, where the ResNet-50 was used as the CNN architecture and the training parameters were the same as used in the all-patch classifier. The K clusters obtained in the validation cohort were used to optimize the K corresponding classifiers. The K cluster-based classifiers may have different powers in classifying the tumor types. Here we used the all-patch classifier as a performance benchmark. For each patient, the clusters with better classification accuracy than the benchmark were selected for further analysis. The CNN architecture and training parameters were detailed in Supplementary Methods A . Patch-level classification Using the patches from all selected clusters, a patch-level ResNet-50 model was trained on the training cohort while optimized on the validation cohort. The same training parameters were used. This network was used to provide an estimation of the tumor types for each input image patch. Next, we should aggregate the patch-level estimations to make a final patient-level prediction. Patient-level classification The patch-level predictions were aggregated to determine the types of the entire WSI using a majority voting approach. 
Specifically, the class to which the maximum number of patches belonged was used as the final patient-level prediction. This aggregation approach can reduce the bias of patch-level prediction. Model selection To assess the model’s robustness and to select an optimal model, we repeated the training/validation cohort division procedure five times using five-fold cross-validation. In each repetition, the training and validation sets were divided using stratified random resampling with patient characteristics balanced between both sets. During the cross-validation process, the model was trained for a minimum of 50 epochs. Then, the loss on the validation set was computed in each epoch, where the model with the lowest average validation loss over 10 consecutive epochs was saved. If such a model was not found, the training continued up to a maximum of 150 epochs. Finally, the patient-level model with the best-averaging performance across all folds was selected as the proposed diagnostic model. First, the patch clustering algorithm was trained using 43653 candidate patches from 100 randomly selected patients in the training cohort, including 11 A2, 2 A3, 2 A4, 14 O2, 3 O3, and 68 GBM patients. Considering that the original image may not present type-relevant cancer phenotypes, we chose to cluster the patches in the feature domain. The patches were resized into 256 × 256 and were fed into a pre-trained CNN for deep feature extraction. Here a ResNet-50 trained with patch-level labels (six categories) on all patches in the training cohort was used as the CNN feature extractor (referred to as all-patch classifier). Using this trained ResNet-50, 2048 deep features can be extracted from the averaging pooling layer for each patch. 
Based on the features, the candidate 43,653 patches for the 100 patients were used to develop a K -means clustering algorithm by partitioning these patches into K clusters, where the optimal cluster number K was determined using the silhouette coefficient. The Calinski-Harabasz index was also used to additionally assess the clustering quality. The patches in different clusters were considered to have discriminative imaging patterns related to cancer types. The clustering process can be found in Supplementary Methods A . Using the established K -means clustering algorithm, all patches from each patient in the training cohort were partitioned into K clusters. Next, K separate patch-level CNN classifiers were trained on the K patch clusters for all patients in the training cohort respectively, where the ResNet-50 was used as the CNN architecture and the training parameters were the same as used in the all-patch classifier. The K clusters obtained in the validation cohort were used to optimize the K corresponding classifiers. The K cluster-based classifiers may have different powers in classifying the tumor types. Here we used the all-patch classifier as a performance benchmark. For each patient, the clusters with better classification accuracy than the benchmark were selected for further analysis. The CNN architecture and training parameters were detailed in Supplementary Methods A . Using the patches from all selected clusters, a patch-level ResNet-50 model was trained on the training cohort while optimized on the validation cohort. The same training parameters were used. This network was used to provide an estimation of the tumor types for each input image patch. Next, we should aggregate the patch-level estimations to make a final patient-level prediction. The patch-level predictions were aggregated to determine the types of the entire WSI using a majority voting approach. 
Statistical analysis was performed using Python (version 3.6.1), and a P-value < 0.05 was considered significant. The packages and software comprised PyTorch 1.10.0 for model training and testing, CUDA 11.6 and cuDNN 8.1.0.77 for GPU acceleration, and scikit-learn 1.0.2 for statistical analysis. All CNNs were trained on two NVIDIA Tesla V100 GPUs. Differences in patient characteristics between the training cohort and the other cohorts were assessed with a two-sided Wilcoxon test or Chi-square test. The patch-level classifiers were trained on the training cohort and optimized on the validation cohort. The performance of the optimal patient-level classifiers in five-fold cross-validation was further tested on the internal testing cohort and two external testing cohorts.
Receiver operating characteristic (ROC) analysis was used for performance evaluation in terms of area under the ROC curve (AUC), accuracy, sensitivity, specificity, and F1-score in classifying the six categories A2, A3, A4, O2, O3, and GBM. These metrics were calculated using a one-vs.-rest approach for the multi-class problem. The average AUC over the six categories on the validation cohort in each fold was used to select the best model in cross-validation. To address the class-imbalance problem, precision-recall (PR) curves were also calculated to comprehensively assess model performance.
In addition, the performance of the clustering-based model was compared with four other models: a weakly supervised classical multiple-instance learning (MIL) model, an attention-based MIL (AMIL) model, a clustering-constrained-attention MIL (CLAM) model, and the all-patch classification model. Briefly, in MIL the patches with the highest scores (those most likely to be cancerous) were selected for building the diagnostic model. AMIL and CLAM are two variants of MIL: the former learns to emphasize the patches related to the target classes, while the latter extends AMIL to a general multi-class setting with a refined feature space. As described above, the all-patch model used all patches for classification without patch selection. The statistical difference between AUCs was compared using DeLong analysis.
Reporting of the study adhered to the STARD guideline.
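The one-vs.-rest AUC computation over the six categories can be sketched with scikit-learn. The probabilities below are hand-built illustrations, not model outputs:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

CLASSES = ["A2", "A3", "A4", "O2", "O3", "GBM"]

def one_vs_rest_aucs(y_true, y_prob, classes=CLASSES):
    """Per-class one-vs.-rest AUCs plus their macro average.

    y_true: class indices, shape (n,)
    y_prob: predicted class probabilities, shape (n, len(classes))
    """
    aucs = {}
    for i, name in enumerate(classes):
        binary_truth = (np.asarray(y_true) == i).astype(int)  # class i vs. rest
        aucs[name] = roc_auc_score(binary_truth, np.asarray(y_prob)[:, i])
    return aucs, float(np.mean(list(aucs.values())))

# Illustrative check: perfectly confident, correct predictions give AUC 1.0.
y_true = np.arange(120) % 6          # every class represented
y_prob = np.eye(6)[y_true]           # one-hot "probabilities"
aucs, macro = one_vs_rest_aucs(y_true, y_prob)
```

The macro average over the six per-class AUCs corresponds to the fold-selection criterion described above.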
Clinical practice guidelines for cervical cancer: an update of the Korean Society of Gynecologic Oncology Guidelines
The Korean Society of Gynecologic Oncology (KSGO) announced the fifth version of its clinical practice guidelines for the management of cervical cancer in March 2024. These guidelines were developed to reflect the latest insights and address critical contemporary issues in cervical cancer care, focusing on 5 key clinical questions. Each question was explored through systematic reviews and meta-analyses, forming the basis for drafting evidence-based recommendations with clearly defined levels and grades of evidence. These drafts underwent further refinement through consultations with relevant academic societies and public hearings, culminating in the release of the final version. The selection of the key questions and the systematic reviews were based on data available up to December 2022. However, between 2023 and 2024, substantial findings from large-scale clinical trials and new advancements in cervical cancer research emerged. To incorporate these developments, the KSGO has released the Clinical Practice Guidelines for Cervical Cancer version 5.1, an updated edition that builds on the foundational work of version 5.0. The updated guidelines integrate newly published studies and reassess existing evidence to provide the most up-to-date recommendations. Among the original 5 key questions, 3 have been updated, 1 remains unchanged, and 1 has been removed due to insufficient clinical evidence. In addition, 4 new key questions have been introduced. These changes are summarized in and . For each question, a recommendation was formulated with a corresponding level of evidence and grade of recommendation, all established through expert consensus.
1. KQ1.
Does the addition of immune checkpoint inhibitors to primary treatment (chemotherapy +/− bevacizumab) improve the survival of patients with persistent, recurrent or metastatic cervical cancer?
P (population): Recurrent or metastatic cervical cancer
I (intervention): Chemotherapy +/− angiogenesis inhibitor + immune checkpoint inhibitor
C (comparison): Chemotherapy +/− angiogenesis inhibitor
O (outcome): Survival
The following recommendation was made through consensus: Adding immune checkpoint inhibitors to chemotherapy +/− bevacizumab is recommended for patients with persistent, recurrent or metastatic cervical cancer (Level of evidence: I, Grade of recommendation: A).
Evidence
In the KSGO Clinical Practice Guidelines for Cervical Cancer version 5.0, we provided a recommendation for this key question based on the randomized phase III study, KEYNOTE-826. In version 5.1, we have reanalyzed this key question by incorporating the results of the BEATcc study, a phase III randomized, open-label, multicenter trial that investigated whether adding atezolizumab to the standard carboplatin, paclitaxel, and bevacizumab treatment regimen provides enhanced efficacy. Key characteristics of this study include its open-label design and the mandatory administration of bevacizumab. A total of 410 patients were randomized, and the atezolizumab group demonstrated significantly improved progression-free survival (PFS; hazard ratio [HR]=0.62; 95% confidence interval [CI]=0.49–0.78) and overall survival (OS; HR=0.68; 95% CI=0.52–0.88) compared to the control group. We performed a meta-analysis of these 2 studies, confirming that the addition of immune checkpoint inhibitors to the existing standard chemotherapy regimen significantly improved PFS (HR=0.64; 95% CI=0.55–0.74) and OS (HR=0.67; 95% CI=0.57–0.80). However, grade 3 or higher adverse events were also increased compared to standard therapy alone (HR=1.37; 95% CI=1.02–1.85).
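Pooled estimates like those above are typically obtained by inverse-variance weighting of the log hazard ratios, with each study's standard error recovered from its 95% CI. A minimal sketch follows; the BEATcc PFS figures come from the text, but the second study's numbers are illustrative placeholders (the KEYNOTE-826 estimates are not restated here), so the pooled value is not the guideline's published one. Cochran's Q and I² quantify the between-study heterogeneity that the committee weighed for KQ4 and KQ7:

```python
import math

def hr_to_log_se(hr, lo, hi):
    """Recover log-HR and its SE from a 95% CI: SE = (ln(hi) - ln(lo)) / (2 * 1.96)."""
    return math.log(hr), (math.log(hi) - math.log(lo)) / (2 * 1.96)

def pool_fixed_effect(studies):
    """Inverse-variance fixed-effect pooling of hazard ratios.

    studies: iterable of (HR, CI_low, CI_high) tuples.
    Returns pooled HR, its 95% CI, Cochran's Q, and I^2 (%).
    """
    logs = [hr_to_log_se(*s) for s in studies]
    weights = [1.0 / se ** 2 for _, se in logs]
    pooled_log = sum(w * lh for w, (lh, _) in zip(weights, logs)) / sum(weights)
    pooled_se = 1.0 / math.sqrt(sum(weights))
    q = sum(w * (lh - pooled_log) ** 2 for w, (lh, _) in zip(weights, logs))
    df = len(logs) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    ci = (math.exp(pooled_log - 1.96 * pooled_se),
          math.exp(pooled_log + 1.96 * pooled_se))
    return math.exp(pooled_log), ci, q, i2

# BEATcc PFS (from the text) plus an illustrative placeholder second study.
pooled_hr, ci, q, i2 = pool_fixed_effect([(0.62, 0.49, 0.78),
                                          (0.65, 0.53, 0.79)])
```

The pooled HR always lies between the study estimates and carries a narrower CI than either input; with substantial heterogeneity (high I²) a random-effects model would be the usual alternative.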
Based on these updated results, the KSGO guideline development committee has agreed to revise the recommendation for this key question to its current form.
2. KQ2.
Do immune checkpoint inhibitors improve the survival of patients with recurrent or metastatic cervical cancer in whom primary treatment has failed?
P: Recurrent or metastatic cervical cancer
I: Immune checkpoint inhibitor
C: Conventional chemotherapy
O: Survival
The following recommendation was made through consensus: Immune checkpoint inhibitor monotherapy can be used for patients with recurrent or metastatic cervical cancer that has failed primary treatment (Level of evidence: I, Grade of recommendation: B).
Evidence
The recommendation for this key question was based on the clinical outcomes of EMPOWER-Cervical 1/GOG-3016/ENGOT-cx9, a randomized multicenter phase III clinical trial. No new clinical research data were available for this key question. Therefore, the guideline development committee has agreed to maintain the existing recommendation. Detailed evidence supporting this recommendation has been published.
3. KQ3.
Does minimally invasive radical hysterectomy result in survival outcomes similar to those of open radical hysterectomy in patients with cervical cancer?
P: Cervical cancer
I: Minimally invasive radical hysterectomy
C: Open radical hysterectomy
O: Survival
The following recommendation was made through consensus: In patients with cervical cancer, minimally invasive radical hysterectomy has shown shorter disease-free survival (DFS) and OS compared to open radical hysterectomy. Therefore, open radical hysterectomy is recommended as the standard treatment. Considering the clinical environment and situation in Korea, the choice of surgical method can be made after discussing the benefits and risks of each approach with the patient (Level of evidence: I, Grade of recommendation: D).
Evidence
The recommendation for this key question was based on the Laparoscopic Approach to Cervical Cancer (LACC) trial. In the final survival analysis of the LACC trial published in 2024, both DFS and OS remained significantly worse in the minimally invasive radical hysterectomy group than in the open radical hysterectomy group (DFS: HR=3.91; 95% CI=2.02–7.58; OS: HR=2.71; 95% CI=1.32–5.59). Based on these findings, the recommendation in version 5.0 was stated as: "Consideration should be given to not performing minimally invasive radical hysterectomy in patients with cervical cancer." Subsequently, expert opinions from various fields were gathered through multiple public hearings. Considering the introduction of various efforts and surgical techniques to prevent tumor cell spillage, and the fact that the recurrence rate for minimally invasive radical hysterectomy was not higher in the subgroup of patients with prior conization in the LACC trial, many expressed the opinion that a recommendation against performing minimally invasive surgery for all cases of cervical cancer may not be appropriate. Agreeing with this opinion, the KSGO guideline development committee decided to revise the recommendation to its current form.
4. KQ4.
Does adjuvant chemotherapy after chemoradiotherapy and brachytherapy improve the survival of patients with locally advanced cervical cancer?
P: Locally advanced cervical cancer
I: Adjuvant chemotherapy after chemoradiation
C: Chemoradiation
O: Survival
The following recommendation was made through consensus: Consideration should be given to not administering chemotherapy after concurrent chemoradiotherapy (CCRT) for patients with locally advanced cervical cancer (Level of evidence: I, Grade of recommendation: D).
Evidence
In version 5.0, the intervention group for this key question included both chemotherapy and immune checkpoint inhibitors in the analysis.
However, in version 5.1, chemotherapy and immune checkpoint inhibitors were separated, and the recommendation was revised accordingly. As a result, KQ4 was modified to include only adjuvant chemotherapy, and a new key question, KQ7, was added to address the addition of immune checkpoint inhibitors during and after chemoradiotherapy for locally advanced cervical cancer. The meta-analysis conducted for the revised KQ4 included 4 randomized phase 3 clinical trials. The analysis revealed that neither PFS (HR=0.88; 95% CI=0.73–1.08) nor OS (HR=0.93; 95% CI=0.72–1.19) differed significantly between the CCRT plus adjuvant chemotherapy group and the CCRT group. On the other hand, analysis of the 3 studies that reported adverse events showed that the incidence of grade 3 or higher adverse events was significantly higher in the CCRT plus adjuvant chemotherapy group (HR=3.01; 95% CI=1.41–6.45). Since there was heterogeneity in the survival outcomes of the studies included in the meta-analysis, we assigned the grade of recommendation as D.
5. KQ5.
Does simple hysterectomy result in recurrence rates comparable to those of radical hysterectomy in patients with early-stage, low-risk cervical cancer?
P: Early-stage, low-risk cervical cancer
I: Simple hysterectomy
C: Radical hysterectomy
O: Survival
The following recommendation was made through consensus: In patients with early-stage, low-risk cervical cancer, simple hysterectomy has shown non-inferior recurrence rates compared to type II radical hysterectomy. Therefore, the choice of surgical method can be made after discussing the benefits and risks of each approach with the patient (Level of evidence: I, Grade of recommendation: B).
Evidence
In the phase III multicenter randomized noninferiority SHAPE trial, patients who underwent simple hysterectomy for early-stage low-risk cervical cancer showed noninferiority in pelvic recurrence rate compared to the patients who underwent type II radical hysterectomy (2.52% vs.
2.17%; HR=1.12; 95% CI=0.47–2.67). No difference was observed between the 2 groups for OS (HR=1.09; 95% CI=0.38–3.14). The incidence of surgery-related adverse events within 4 weeks after surgery was lower in the simple hysterectomy group (42.6% vs. 50.6%, p=0.04), and the incidence of urinary incontinence and urinary retention both within and beyond 4 weeks after surgery was significantly lower in the simple hysterectomy group than in the radical hysterectomy group. The patients included in this study had cervical squamous cell carcinoma, adenocarcinoma, or adenosquamous carcinoma, with tumor sizes ≤2 cm and invasion depths ≤10 mm. However, several factors warrant caution in interpreting the results of the SHAPE trial. Notably, 75% of the patients underwent minimally invasive surgery, about 80% had prior conization, and fewer than 50% of patients had residual disease in the hysterectomy specimen. Additionally, type III radical hysterectomy was not performed as the control intervention. Considering these factors, the guideline development committee agreed to assign a grade of recommendation of B for this recommendation.
6. KQ6.
Does induction chemotherapy before chemoradiotherapy improve the survival of patients with locally advanced cervical cancer?
P: Locally advanced cervical cancer
I: Neoadjuvant chemotherapy prior to chemoradiation
C: Chemoradiation
O: Survival
The following recommendation was made through consensus: Induction chemotherapy can be administered prior to chemoradiotherapy for patients with locally advanced cervical cancer (Level of evidence: I, Grade of recommendation: B).
Patients who received induction chemotherapy showed significantly improved PFS (HR=0.65; 95% CI=0.46–0.91) and OS (HR=0.60; 95% CI=0.40–0.91). In terms of relapse patterns, local relapse rates were similar in both groups (16%), but distant relapse was lower in the induction chemotherapy group (12%) compared to the control group (20%). However, the induction chemotherapy group had more frequent grade 3 or higher adverse events: 59% vs. 48%, and hematologic grade 3 or higher adverse events occurred in 30% vs. 13%, respectively. Induction chemotherapy with a short course of carboplatin and paclitaxel has advantages in its low cost and wide availability. However, considering that the short course carboplatin and paclitaxel regimen is not yet approved in South Korea, and that applying induction chemotherapy to all patients with locally advanced cervical cancer is not appropriate, the guideline development committee decided to assign the grade of recommendation as B. 7. KQ7. Does the addition of immune checkpoint inhibitors to chemoradiotherapy improve the survival of patients with locally advanced cervical cancer? P: Locally advanced cervical cancer I: Chemoradiation + immune checkpoint inhibitor C: Chemoradiation O: Survival The following recommendation was made through consensus: Immune checkpoint inhibitor can be added to chemoradiotherapy for patients with locally advanced cervical cancer (Level of evidence: I, Grade of recommendation: B). Evidence In the phase III, multicenter randomized, double-blind CALLA trial published in 2023, patients with locally advanced cervical cancer received either the programmed death-ligand 1 (PD-L1) inhibitor durvalumab or a placebo every 4 weeks during and after definitive chemoradiotherapy . A total of 770 patients participated in the study, and no statistically significant improvement in PFS was observed (HR=0.84; 95% CI=0.65–1.08; p=0.17). 
On the other hand, the phase III randomized KEYNOTE-A18 trial, which included 1,060 patients with locally advanced cervical cancer, investigated the addition of the programmed cell death protein 1 (PD-1) inhibitor pembrolizumab or placebo every 3 weeks during chemoradiotherapy, followed by maintenance therapy every 6 weeks for approximately 2 years. The pembrolizumab group showed statistically significant improvements in both PFS (HR=0.68; 95% CI=0.56–0.84) and OS (HR=0.67; 95% CI=0.50–0.90) compared to the control group. A meta-analysis of these 2 studies showed that the addition of immune checkpoint inhibitors to chemoradiotherapy significantly improved both PFS (HR=0.76; 95% CI=0.64–0.91) and OS (HR=0.71; 95% CI=0.57–0.89). There was no significant difference in the incidence of grade 3 or higher adverse events between the 2 groups (HR=1.18; 95% CI=0.92–1.51). Based on the results of these large-scale clinical trials, we developed the above recommendation through expert consensus. However, considering the heterogeneity in the results of the 2 studies included in the analysis and the potential differences in efficacy based on the mechanism of action between PD-1 and PD-L1 inhibitors, the grade of recommendation was assessed as B.
8. KQ8.
Do antibody-drug conjugates improve the survival of patients with recurrent or metastatic cervical cancer in whom primary treatment has failed?
P: Recurrent or metastatic cervical cancer
I: Antibody-drug conjugate
C: Conventional chemotherapy
O: Survival
The following recommendation was made through consensus: Antibody-drug conjugate tisotumab vedotin-tftv monotherapy can be used for patients with recurrent or metastatic cervical cancer that has failed primary treatment (Level of evidence: I, Grade of recommendation: B).
Evidence
In the phase 3, randomized, open-label InnovaTV-301 trial, patients with cervical cancer who failed platinum-based first-line treatment were compared between tisotumab vedotin-tftv, an antibody-drug conjugate, and investigator’s choice of chemotherapy. A total of 502 patients participated, and tisotumab vedotin-tftv showed statistically significant improvements in OS (HR=0.70; 95% CI=0.54–0.89) and PFS (HR=0.67; 95% CI=0.54–0.82) compared to the investigator’s choice of chemotherapy. Subgroup analysis confirmed that these survival benefits were observed regardless of prior use of immune checkpoint inhibitors. On the other hand, the incidence of any grade 3 or higher adverse events was lower in the tisotumab vedotin-tftv group (HR=0.65; 95% CI=0.46–0.94). Based on these findings, tisotumab vedotin-tftv may be a preferred treatment option over chemotherapy for patients with cervical cancer who have failed first-line treatment. However, for the same reason as KQ2, the grade of recommendation was assigned as B due to the current unavailability of tisotumab vedotin-tftv in South Korea.
In version 5.1, we have reanalyzed this key question by incorporating the results of the BEATcc study, a phase III randomized, open-label, multicenter trial that investigated whether adding atezolizumab to the standard carboplatin, paclitaxel, and bevacizumab treatment regimen provides enhanced efficacy . Key characteristics of this study include its open-label design and the mandatory administration of bevacizumab. A total of 410 patients were randomized, and the atezolizumab group demonstrated significantly improved progression-free survival (PFS; hazard ratio [HR]=0.62; 95% confidence interval [CI]=0.49–0.78) and overall survival (OS; HR=0.68; 95% CI=0.52–0.88) compared to the control group. We performed a meta-analysis of these 2 studies, confirming that the addition of immune checkpoint inhibitors to the existing standard chemotherapy regimen significantly improved PFS (HR=0.64; 95% CI=0.55–0.74) and OS (HR=0.67; 95% CI=0.57–0.80) . However, grade 3 or higher adverse events were also increased compared to standard therapy alone (HR=1.37; 95% CI=1.02–1.85). Based on these updated results, the KSGO guideline development committee has agreed to revise the recommendation for this key question to its current form. In the KSGO Clinical Practice Guidelines for Cervical Cancer version 5.0, we provided a recommendation for this key question based on the randomized phase III study, KEYNOTE-826 . In version 5.1, we have reanalyzed this key question by incorporating the results of the BEATcc study, a phase III randomized, open-label, multicenter trial that investigated whether adding atezolizumab to the standard carboplatin, paclitaxel, and bevacizumab treatment regimen provides enhanced efficacy . Key characteristics of this study include its open-label design and the mandatory administration of bevacizumab. 
A total of 410 patients were randomized, and the atezolizumab group demonstrated significantly improved progression-free survival (PFS; hazard ratio [HR]=0.62; 95% confidence interval [CI]=0.49–0.78) and overall survival (OS; HR=0.68; 95% CI=0.52–0.88) compared to the control group. We performed a meta-analysis of these 2 studies, confirming that the addition of immune checkpoint inhibitors to the existing standard chemotherapy regimen significantly improved PFS (HR=0.64; 95% CI=0.55–0.74) and OS (HR=0.67; 95% CI=0.57–0.80) . However, grade 3 or higher adverse events were also increased compared to standard therapy alone (HR=1.37; 95% CI=1.02–1.85). Based on these updated results, the KSGO guideline development committee has agreed to revise the recommendation for this key question to its current form. P: Recurrent or metastatic cervical cancer I: Immune checkpoint inhibitor C: Conventional chemotherapy O: Survival The following recommendation was made through consensus: Immune checkpoint inhibitor monotherapy can be used for patients with recurrent or metastatic cervical cancer that has failed primary treatment (Level of evidence: I, Grade of recommendation: B). Evidence The recommendation for this key question was based on the clinical outcomes of the EMPOWER-Cervical 1/GOG-3016/ENGOT-cx9, a randomized multicenter phase III clinical trial . No new clinical research data was available for this key question. Therefore, the guideline development committee has agreed to maintain the existing recommendation for this key question. Detailed evidence supporting this recommendation has been published . The recommendation for this key question was based on the clinical outcomes of the EMPOWER-Cervical 1/GOG-3016/ENGOT-cx9, a randomized multicenter phase III clinical trial . No new clinical research data was available for this key question. Therefore, the guideline development committee has agreed to maintain the existing recommendation for this key question. 
Detailed evidence supporting this recommendation has been published . P: Cervical cancer I: Minimally invasive radical hysterectomy C: Open radical hysterectomy O: Survival The following recommendation was made through consensus: In patients with cervical cancer, minimally invasive radical hysterectomy has shown shorter disease-free survival (DFS) and OS compared to open radical hysterectomy. Therefore, open radical hysterectomy is recommended as the standard treatment. Considering the clinical environment and situation in Korea, the choice of surgical method can be made after discussing the benefits and risks of each approach with the patient (Level of evidence: I, Grade of recommendation: D). Evidence The recommendation for this key question was based on the Laparoscopic Approach to Cervical Cancer (LACC) trial . In the final survival analysis of the LACC trial published in 2024, Both DFS and OS remained significantly lower in the minimally invasive radical hysterectomy group than in the open radical hysterectomy group (DFS; HR=3.91; 95% CI=2.02–7.58; OS; HR=2.71; 95% CI=1.32–5.59) . Based on these findings, the recommendation in version 5.0 was stated as: "Consideration should be given to not performing minimally invasive radical hysterectomy in patients with cervical cancer." Subsequently, expert opinions from various fields were gathered through multiple public hearings. Considering the introduction of various efforts and surgical techniques to prevent tumor cell spillage , and the fact that the recurrence rate for minimally invasive radical hysterectomy was not lower in the subgroup of patients with prior conization in the LACC trial , many expressed the opinion that the recommendation against performing minimally invasive surgery for all cases of cervical cancer may not be appropriate. Agreeing with this opinion, the KSGO guideline development committee decided to revise the recommendation to its current form. 
The recommendation for this key question was based on the Laparoscopic Approach to Cervical Cancer (LACC) trial . In the final survival analysis of the LACC trial published in 2024, Both DFS and OS remained significantly lower in the minimally invasive radical hysterectomy group than in the open radical hysterectomy group (DFS; HR=3.91; 95% CI=2.02–7.58; OS; HR=2.71; 95% CI=1.32–5.59) . Based on these findings, the recommendation in version 5.0 was stated as: "Consideration should be given to not performing minimally invasive radical hysterectomy in patients with cervical cancer." Subsequently, expert opinions from various fields were gathered through multiple public hearings. Considering the introduction of various efforts and surgical techniques to prevent tumor cell spillage , and the fact that the recurrence rate for minimally invasive radical hysterectomy was not lower in the subgroup of patients with prior conization in the LACC trial , many expressed the opinion that the recommendation against performing minimally invasive surgery for all cases of cervical cancer may not be appropriate. Agreeing with this opinion, the KSGO guideline development committee decided to revise the recommendation to its current form. P: Locally advanced cervical cancer I: Adjuvant chemotherapy after chemoradiation C: Chemoradiation O: Survival The following recommendation was made through consensus: Consideration should be given to not administering chemotherapy after concurrent chemoradiotherapy (CCRT) for patients with locally advanced cervical cancer (Level of evidence: I, Grade of recommendation: D). Evidence In version 5.0, the intervention group for this key question included both chemotherapy and immune checkpoint inhibitors in the analysis. However, in version 5.1, chemotherapy and immune checkpoint inhibitors were separated, and the recommendation was revised accordingly. 
As a result, KQ4 was modified to include only adjuvant chemotherapy, and a new key question, KQ7, was added to address the addition of immune checkpoint inhibitors during and after chemoradiotherapy for locally advanced cervical cancer. The meta-analysis conducted for the revised KQ4 included 4 randomized phase 3 clinical trials . The analysis revealed that both PFS (HR=0.88; 95% CI=0.73–1.08) and OS (HR=0.93; 95% CI=0.72–1.19) did not differ significantly between the CCRT plus adjuvant chemotherapy group and the CCRT group. On the other hand, analysis of 3 studies that reported adverse events showed that the incidence of grade 3 or higher adverse events was significantly higher in the CCRT plus adjuvant chemotherapy group (HR=3.01; 95% CI=1.41–6.45). Since there was heterogeneity in the survival outcomes of the studies included in the meta-analysis, we assigned the grade of recommendation as D. In version 5.0, the intervention group for this key question included both chemotherapy and immune checkpoint inhibitors in the analysis. However, in version 5.1, chemotherapy and immune checkpoint inhibitors were separated, and the recommendation was revised accordingly. As a result, KQ4 was modified to include only adjuvant chemotherapy, and a new key question, KQ7, was added to address the addition of immune checkpoint inhibitors during and after chemoradiotherapy for locally advanced cervical cancer. The meta-analysis conducted for the revised KQ4 included 4 randomized phase 3 clinical trials . The analysis revealed that both PFS (HR=0.88; 95% CI=0.73–1.08) and OS (HR=0.93; 95% CI=0.72–1.19) did not differ significantly between the CCRT plus adjuvant chemotherapy group and the CCRT group. On the other hand, analysis of 3 studies that reported adverse events showed that the incidence of grade 3 or higher adverse events was significantly higher in the CCRT plus adjuvant chemotherapy group (HR=3.01; 95% CI=1.41–6.45). 
Since there was heterogeneity in the survival outcomes of the studies included in the meta-analysis, we assigned the grade of recommendation as D. P: Early-stage, low-risk cervical cancer I: Simple hysterectomy C: Radical hysterectomy O: Survival The following recommendation was made through consensus: In patients with early-stage, low-risk cervical cancer, simple hysterectomy has shown non-inferior recurrence rates compared to type II radical hysterectomy. Therefore, the choice of surgical method can be made after discussing the benefits and risks of each approach with the patient (Level of evidence: I, Grade of recommendation: B). Evidence In the phase III multicenter randomized noninferior SHAPE trial , patients who underwent simple hysterectomy for early-stage low-risk cervical cancer showed noninferiority in pelvic recurrence rate compared to the patients who underwent type II radical hysterectomy (2.52% vs. 2.17%; HR=1.12; 95% CI=0.47–2.67). No difference was observed between the 2 groups for OS (HR=1.09; 95% CI=0.38-3.14). The incidence of surgery-related adverse events within 4 weeks after surgery was lower in the simple hysterectomy group (42.6% vs. 50.6%, p=0.04). And the incidence of urinary incontinence and urinary retention within and beyond 4 weeks after surgery was significantly lower in the simple hysterectomy group than in the radical hysterectomy group. The patients included in this study had cervical squamous cell carcinoma, adenocarcinoma, or adenosquamous carcinoma, with tumor sizes ≤2 cm and invasion depths ≤10 mm. However, several factors warrant caution in interpreting the results of the SHAPE trial. Notably, 75% of the patients underwent minimally invasive surgery, about 80% had prior conization, and fewer than 50% of patients had residual disease in the hysterectomy specimen. Additionally, type III radical hysterectomy was not performed as the control intervention. 
Considering these factors, the guideline development committee agreed to assign a grade of recommendation as B for this recommendation.
P: Locally advanced cervical cancer
I: Neoadjuvant chemotherapy prior to chemoradiation
C: Chemoradiation
O: Survival
The following recommendation was made through consensus: Induction chemotherapy can be administered prior to chemoradiotherapy for patients with locally advanced cervical cancer (Level of evidence: I, Grade of recommendation: B).
Evidence
In the phase III, multicenter, randomized, open-label INTERLACE trial, patients with locally advanced cervical cancer were randomly assigned to receive induction chemotherapy before definitive chemoradiotherapy or chemoradiotherapy alone. The induction chemotherapy regimen consisted of weekly paclitaxel 80 mg/m² and carboplatin area under the curve 2 for 6 weeks. Patients who received induction chemotherapy showed significantly improved PFS (HR=0.65; 95% CI=0.46–0.91) and OS (HR=0.60; 95% CI=0.40–0.91). In terms of relapse patterns, local relapse rates were similar in both groups (16%), but distant relapse was lower in the induction chemotherapy group (12%) than in the control group (20%). However, the induction chemotherapy group had more frequent grade 3 or higher adverse events (59% vs. 48%), with hematologic grade 3 or higher adverse events occurring in 30% vs. 13%, respectively. Induction chemotherapy with a short course of carboplatin and paclitaxel has the advantages of low cost and wide availability. However, considering that the short-course carboplatin and paclitaxel regimen is not yet approved in South Korea, and that applying induction chemotherapy to all patients with locally advanced cervical cancer is not appropriate, the guideline development committee decided to assign the grade of recommendation as B.
P: Locally advanced cervical cancer
I: Chemoradiation + immune checkpoint inhibitor
C: Chemoradiation
O: Survival
The following recommendation was made through consensus: Immune checkpoint inhibitor can be added to chemoradiotherapy for patients with locally advanced cervical cancer (Level of evidence: I, Grade of recommendation: B).
Evidence
In the phase III, multicenter, randomized, double-blind CALLA trial published in 2023, patients with locally advanced cervical cancer received either the programmed death-ligand 1 (PD-L1) inhibitor durvalumab or a placebo every 4 weeks during and after definitive chemoradiotherapy. A total of 770 patients participated in the study, and no statistically significant improvement in PFS was observed (HR=0.84; 95% CI=0.65–1.08; p=0.17). On the other hand, the phase III randomized KEYNOTE-A18 trial, which included 1,060 patients with locally advanced cervical cancer, investigated the addition of the programmed cell death protein 1 (PD-1) inhibitor pembrolizumab or placebo every 3 weeks during chemoradiotherapy, followed by maintenance therapy every 6 weeks for approximately 2 years.
The pembrolizumab group showed statistically significant improvements in both PFS (HR=0.68; 95% CI=0.56–0.84) and OS (HR=0.67; 95% CI=0.50–0.90) compared with the control group. A meta-analysis of these 2 studies showed that the addition of immune checkpoint inhibitors to chemoradiotherapy significantly improved both PFS (HR=0.76; 95% CI=0.64–0.91) and OS (HR=0.71; 95% CI=0.57–0.89). There was no significant difference in the incidence of grade 3 or higher adverse events between the 2 groups (HR=1.18; 95% CI=0.92–1.51). Based on the results of these large-scale clinical trials, we developed the above recommendation through expert consensus. However, considering the heterogeneity in the results of the 2 studies included in the analysis and the potential differences in efficacy based on the mechanism of action between PD-1 and PD-L1 inhibitors, the grade of recommendation was assessed as B.
P: Recurrent or metastatic cervical cancer
I: Antibody-drug conjugate
C: Conventional chemotherapy
O: Survival
The following recommendation was made through consensus: Antibody-drug conjugate tisotumab vedotin-tftv monotherapy can be used for patients with recurrent or metastatic cervical cancer that has failed primary treatment (Level of evidence: I, Grade of recommendation: B).
Evidence
In the phase 3, randomized, open-label InnovaTV-301 trial, patients with cervical cancer who had failed platinum-based first-line treatment were randomly assigned to tisotumab vedotin-tftv, an antibody-drug conjugate, or investigator's choice of chemotherapy. A total of 502 patients participated, and tisotumab vedotin-tftv showed statistically significant improvements in OS (HR=0.70; 95% CI=0.54–0.89) and PFS (HR=0.67; 95% CI=0.54–0.82) compared with the investigator's choice of chemotherapy. Subgroup analysis confirmed that these survival benefits were observed regardless of prior use of immune checkpoint inhibitors. On the other hand, the incidence of any grade 3 or higher adverse events was lower in the tisotumab vedotin-tftv group (HR=0.65; 95% CI=0.46–0.94).
Based on these findings, tisotumab vedotin-tftv may be a preferred treatment option over chemotherapy for patients with cervical cancer who have failed first-line treatment. However, for the same reason as KQ2, the grade of recommendation was assigned as B due to the current unavailability of tisotumab vedotin-tftv in South Korea.
Outcomes before and after providing interdisciplinary hematology and pulmonary care for children with sickle cell disease | 4c8700a6-ff2c-4fd9-8890-74ace64c0441 | 10205588 | Internal Medicine[mh] | Sickle cell disease (SCD) is a genetic and chronic disease that primarily affects Black and Hispanic populations in the United States. It causes significant morbidity, including acute and chronic pain, decreased quality of life, end-organ damage, and reduces the average lifespan by 20 to 30 years compared with those without SCD. , , There are several well-documented factors that contribute to the morbidity and mortality in people with SCD (pwSCD), and acute and chronic pulmonary conditions are the most common ones. For example, asthma is seen in ∼12% of all children in the United States, but it is estimated that 20% to 25% of children with SCD have concomitant asthma. However, clear definitions of asthma in SCD versus wheezing from other chronic mechanisms, such as hemolysis-induced inflammation, are not well identified and typically require pulmonary subspecialty evaluation and management. People with asthma and SCD have an increased risk of all-cause mortality compared with those without asthma. This is likely because of their increased risk of developing acute vaso-occlusive pain episodes (VOEs), stroke, acute chest syndrome (ACS), and the increased need for blood transfusions compared with those with SCD alone. , ACS itself is also a significant contributor to SCD morbidity and mortality. ACS is the second most common cause of hospitalization and the most common cause of death, with 25% of pwSCD succumbing to this complication. Recurrent ACS episodes lead to an increased risk of irreversible lung damage that can manifest as either a restrictive or an obstructive chronic lung disease pattern. In addition, obstructive sleep apnea (OSA) is another common pulmonary condition that can complicate SCD. 
Whereas 1% to 5% of children are diagnosed with OSA, the prevalence of OSA and other sleep disorders remains poorly defined, with past studies documenting that OSA affects between 5% and 59% of pwSCD. Sleep disorders are particularly problematic in pwSCD as airway obstruction during sleep leads to oxygen desaturations that can increase erythrocyte sickling and subsequent pathology, including cardiac dysfunction and pulmonary hypertension. Ensuring that pwSCD with coexistent pulmonary disease receive hematology and pulmonary preventive care has the potential to reduce SCD complications. However, it has been well-documented that pwSCD and their families report facing more barriers to accessing health care than other Black children without SCD, even when other demographic variables are controlled. They report increased difficulty attending appointments and arranging transportation, and report waiting longer to see their providers compared with the general population. These issues may be related to poverty because many pwSCD live below the federal poverty line, which can limit the ability to attend appointments and/or be insured. Families of pwSCD also report a lack of communication between different parts of the health care system that can make accessing and navigating the health care system particularly challenging. Those with comorbid asthma report facing an even larger number of barriers to care, receiving more discordant care between their medical providers, and feeling even more marginalized than those with SCD alone. Families of pwSCD report having fewer opportunities to access quality comprehensive care than families of children with other chronic conditions and special health care needs and report feeling that this leads to increased use of the emergency department and increased hospitalizations.
To mitigate access barriers, some have advocated for grouping health care visits together on the same day to reduce the transportation and time burden that appointments put on families. Previous studies suggest that implementing a multidisciplinary care clinic is associated with a significant decrease in acute care usage among those with SCD who have a history of high acute care usage, and an interdisciplinary SCD and pulmonary clinic may improve appointment adherence. To this end, in 2014, the Nationwide Children's Hospital (NCH) created an interdisciplinary clinic that provides pulmonary care for pwSCD. The clinic was offered to pwSCD at the NCH with a history of at least 1 pulmonary complication, such as asthma, OSA, hypoxia, and/or recurrent ACS. To evaluate this model of care, we aim to compare the outcomes of pwSCD during the 2 years before their initial SCD-pulmonary visit with the 2 years after this visit. We hypothesize that pwSCD would have fewer hospitalizations for ACS, asthma, and VOEs in the 2 years after their initial SCD-pulmonary clinic visit.
Description of the interdisciplinary SCD-pulmonary comprehensive clinic
The SCD-pulmonary interdisciplinary team at the NCH included 2 pediatric hematologists, 2 pediatric pulmonologists, a respiratory therapist with portable pulmonary function testing (PFT) equipment, 3 SCD nurse practitioners, 2 SCD nurse clinicians, a social worker, a psychologist, a genetic counselor, and a school liaison. This bimonthly clinic is located on the main NCH campus and includes all the testing and counseling that are provided in the standard SCD comprehensive clinic, with additional evaluation by 1 of the pediatric pulmonologists, respiratory therapist teaching, and specific pulmonary testing including PFT, pulse oximetry, sleep and tobacco smoke exposure screening, and an option for plethysmography, polysomnography (PSG), and 6-minute walk test as clinically indicated. PwSCD were followed up biannually in this clinic if they had ongoing pulmonary needs or were transitioned back to the standard comprehensive SCD clinic after managing the pulmonary problem. All PFTs and plethysmography were performed in accordance with the third National Health and Nutrition Examination Survey (NHANES III) standards and the American Thoracic Society guidelines.
Study design and population
We conducted an institutional review board–approved retrospective chart review of all pwSCD who visited the NCH SCD-pulmonary interdisciplinary clinic between 13 February 2014 (initial SCD-pulmonary clinic) and 10 December 2020 (last SCD-pulmonary clinic of 2020). This study was conducted in accordance with the Declaration of Helsinki. The NCH SCD database and search function of the electronic medical record (EMR) were used to manually identify pwSCD who had been seen for their initial SCD-pulmonary clinic visit during the study period. From here, EMRs were examined to identify those who were followed up at the NCH for at least 2 years before and 2 years after their initial SCD-pulmonary visit.
PwSCD were excluded from the study if they underwent stem cell transplant during the study period.
Data collection
Demographics (eg, age, gender, SCD genotype) and insurance type (eg, private, public) at the time of their initial SCD-pulmonary visit were recorded. Hospitalizations for VOE and/or ACS that occurred during the 4 years that an individual was followed were identified using the International Classification of Diseases, 10th Revision (ICD10) codes listed within the EMR. PFTs, plethysmography, PSGs, and echocardiograms (ECHO) that were ordered and obtained during the 4-year period were also reviewed. PFT parameters included forced vital capacity (FVC) and forced expiratory volume in 1 second (FEV1). PSG parameters included the presence of OSA, apnea-hypopnea index (AHI), sleep efficiency, total rapid eye movement (REM) sleep, oxygen saturations, and the presence of hypoventilation. OSA was defined as an AHI of >5 during PSG. ECHO parameters included the presence of left ventricular hypertrophy (LVH), left atrial hypertrophy (LAH), diastolic or systolic dysfunction, and tricuspid regurgitant jet velocity (TRJV). A TRJV ≥2.5 m/s was considered abnormal. EMR data were obtained using automated data pull and manual collection. Automated data abstraction was used to collect demographics, clinical and acute care visits, intensive care unit (ICU) admissions, PFTs, medication prescribing, and diagnoses. For each of these automated variables, 10 charts were manually reviewed to verify that the automated data that were pulled were accurate.
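The diagnostic thresholds used in the chart review (OSA defined as an AHI >5 on PSG; TRJV ≥2.5 m/s considered abnormal on ECHO) can be expressed as simple screening flags. A minimal sketch; the function names and example values are ours, not the study's:

```python
def has_osa(ahi: float) -> bool:
    """Flag OSA using the study's polysomnography threshold (AHI > 5)."""
    return ahi > 5

def abnormal_trjv(trjv_m_per_s: float) -> bool:
    """Flag an abnormal tricuspid regurgitant jet velocity (>= 2.5 m/s)."""
    return trjv_m_per_s >= 2.5

# Hypothetical (AHI, TRJV) pairs for two records:
flags = [(has_osa(a), abnormal_trjv(t)) for a, t in [(3.2, 2.1), (7.8, 2.6)]]
```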
ECHO and PSG data were manually abstracted from studies that had been completed at any point during the 4-year study period, whereas prescription of SCD-modifying medication (eg, hydroxyurea), inhaled corticosteroid (ICS), or systemic corticosteroid, laboratory studies (as long as the patient had not been transfused within the previous 3 months), PFT results, self-reported tobacco smoke exposure, and the ICD10 codes for diagnoses of asthma and/or allergic rhinitis were collected, if available, from SCD visits that occurred ∼2 years before their initial SCD-pulmonary visit, from the initial SCD-pulmonary visit, and from the SCD-pulmonary visit that occurred ∼2 years later.
Statistical analysis
Data were summarized with standard descriptive statistics: frequency and percentage for categorical variables, and median and interquartile range (IQR) for quantitative variables. Wilcoxon signed rank tests were used to compare quantitative variables from before with those after the initial SCD-pulmonary visit, such as the number of acute visits, number of systemic steroid courses, and laboratory values. McNemar test was used to compare categorical data between the 2 time points. P values were 2-sided and P < .05 was considered statistically significant. All statistical analyses were completed using SAS software, version 9.4 (SAS Institute, Cary, NC) or the base R package (R Foundation for Statistical Computing, Vienna, Austria).
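The McNemar test used for the paired before/after categorical comparisons needs only the discordant pair counts (subjects who changed status between the two time points). A minimal plain-Python sketch with invented counts for illustration (the study itself ran its analyses in SAS and R):

```python
import math

def mcnemar(b: int, c: int):
    """McNemar test with continuity correction for paired yes/no outcomes.

    b = subjects positive only at the first time point,
    c = subjects positive only at the second; concordant pairs drop out.
    """
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    # Survival function of the chi-square distribution with 1 df,
    # expressed via the complementary error function
    p = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p

# Hypothetical counts: 40 pwSCD missed a visit before but not after, 8 the reverse
chi2, p = mcnemar(40, 8)
```

The Wilcoxon signed rank tests for the quantitative variables are the analogous paired procedure, ranking within-subject differences rather than counting discordant pairs.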
Participants
Of the 513 pwSCD followed at NCH during the study period, 145 had at least 1 SCD-pulmonary visit, and 119 were followed for at least 2 years before and 2 years after their initial SCD-pulmonary visit and were included in the analyses. Of the 119 pwSCD who were followed longitudinally, 77 (65%) were evaluated at a separate pulmonary clinic visit before their initial SCD-pulmonary care visit, but 58% had at least 1 instance of clinic nonattendance when they were scheduled for an appointment in the separate pulmonary clinic. In contrast, in the 2 years after their initial SCD-pulmonary clinic visit, pwSCD attended a median of 3 (IQR, 2-3) additional SCD-pulmonary visits, and only 19% had at least 1 instance of SCD-pulmonary clinic nonattendance (P < .001).
Acute care utilization
The median number of acute care visits by this cohort for ACS (P < .001) and asthma (P = .006) was significantly lower, and the number of unique pwSCD who had at least 1 visit for ACS or for asthma was significantly lower, during the 2 years after they were evaluated in the SCD-pulmonary clinic than during the 2 years before that (ACS, 66% vs 34%; P < .001; asthma, 24% vs 12%; P = .014) (B-C). There were no statistically significant differences in the median number of hospital admissions for VOE or in receipt of ICU care (A-D).
Diagnoses and prescription of medication
Diagnoses of asthma and allergic rhinitis were more frequently observed after SCD-pulmonary clinic evaluation, along with an increase in prescriptions issued for hydroxyurea therapy and ICS. There was a significant reduction in the number of pwSCD for whom systemic corticosteroids were prescribed (36% before vs 18% after, P < .001) and in the overall number of systemic corticosteroid courses that were prescribed in the 2 years after visiting the SCD-pulmonary clinic (P < .001).
Diagnostic testing findings
Of the 86 pwSCD who had PFTs completed at their initial SCD-pulmonary visit, 29 showed either an obstructive or restrictive pattern, with no significant increase in observed abnormal patterns over time (P = .25). In addition, 47% had improvement in post-clinic FEV1, with stable longitudinal trends in FVC and FEV1. Of the 62 participants who had bronchodilator testing, 27 (44%) had a response. ECHO findings demonstrated a high frequency of LVH and LAH in pwSCD who had these studies completed before and after their initial SCD-pulmonary clinic evaluation. Thirty-six individuals completed a PSG before attending the SCD-pulmonary clinic and 36 completed a PSG after attending the clinic (n = 15 matched studies). Sleep abnormalities were common during both study observation periods, with reduced REM sleep, reduced sleep efficiency, hypoxia, OSA, and prolonged sleep onset latency observed. Ninety-four pwSCD had laboratory values that were obtained 2 years before and 2 years after their initial SCD-pulmonary clinic visit.
Despite the well-established burden of pulmonary disease and difficulties with access to care in pwSCD, optimal treatment paradigms are not yet well established. Studies have demonstrated that pwSCD and their families often have health care access barriers. The combined SCD-pulmonary clinic at the NCH was established with the intent of easing health care access barriers by allowing children with SCD to be evaluated for their underlying pulmonary complications at the same time that they were receiving their hematology care. We observed that receiving care in this clinic was associated with a variety of improved outcomes, including fewer acute health care visits for ACS and asthma, fewer prescriptions for systemic corticosteroid courses, increased recognition of underlying asthma, and increased prescribing of hydroxyurea and ICS. Future studies are warranted to understand the mechanisms that drive these improvements because our study design does not allow definitive conclusions about this model of care and whether it improves outcomes. Furthermore, the frequency of combined assessment should be studied to determine whether combined assessments performed more frequently than biannually affect outcomes or, conversely, contribute to care burden. Our findings do suggest, however, that this innovative care model is feasible and may be a strategy to reduce access barriers, allow for timely communication between subspecialty providers, and facilitate optimized care among children with SCD who are at high risk. Notably, we observed a significant reduction in the number of hospitalizations for ACS and asthma and in the prescription of systemic corticosteroid courses in the 2 years after the initial SCD-pulmonary clinic visit compared with the 2 years before the visit. These findings are consistent with a recent study on pulmonology involvement in a nonintegrated clinic and support the notion that improved preventive care reduces usage of acute health care.
For example, attaining better control of asthma or inflammatory airway disease from improved ICS use may lead to reduced asthma and ACS admissions and associated prescription of systemic corticosteroids. Limiting systemic corticosteroids is particularly important for pwSCD because these medications are known to increase the risk of subsequent hospitalization for VOE. Frequency of VOE was not significantly different before and after the initial SCD-pulmonary clinic visit. This could be related to the fact that the frequency of VOEs increases with age. Our study period may also have been too short to detect an impact of other factors, beyond pulmonary care, that may influence the frequency of VOE, because VOEs have many possible triggers. We also observed improved hematologic parameters, including significantly lower white blood cell counts and lactate dehydrogenase, indicative of less hemolysis and release of inflammatory cell byproducts. Although fetal hemoglobin significantly declined, and creatinine and mean corpuscular volume increased, we suspect that these changes were related to the increasing age of our cohort. Similar to what was observed in a prior study evaluating an integrated clinic, we saw a reduction in clinic nonattendance. By combining pulmonology care with routine hematology care, pwSCD were able to make fewer trips to receive necessary care. This has the potential to considerably reduce the amount of time and resources needed to receive high-quality care. The collaborative clinic also allowed for easy communication between pulmonary and hematology providers, which may reduce discordance in care. Finally, we observed that prescription of medication for pwSCD and asthma increased after their initial SCD-pulmonary visit. The increase in prescription of ICS was expected because asthma diagnoses increased.
Increased prescription of hydroxyurea, however, might suggest that management by an interdisciplinary team that emphasizes the use of this important disease-modifying therapy could be a strategy to optimize its use. Our study has a few limitations. Because of its retrospective nature, we were unable to determine causality, and future research is needed to dissect which components of this interdisciplinary clinic may be key to improving outcomes and care. Also, although prescription of ICS and hydroxyurea increased, given our data source, we were unable to reliably evaluate whether medication nonadherence may have limited the impact of these therapies on outcomes. In addition, many of the diagnostic testing variables, such as mean corpuscular volume, creatinine, FEV1, and FVC, increase with age, making it challenging to determine whether the interdisciplinary clinic was associated with improvements in these parameters. Finally, our relatively small sample size and limited paired PSG and ECHO data may have hindered our ability to determine the full impact of this model of care. Future multicenter prospective studies are needed to better determine the impact of an interdisciplinary care model on long-term changes in cardiopulmonary and sleep variables. These studies could also elucidate the characteristics of patients who are at high risk of adverse outcomes and could receive the largest potential benefit from interdisciplinary care. In conclusion, introducing a multidisciplinary SCD-pulmonary clinic may allow improved management of common pulmonary problems observed in pwSCD and may lead to improvements in overall health and acute care utilization. Additional studies are warranted to test whether this care model is sustainable, scalable, and definitively improves outcomes for pwSCD.
Adverse childhood events and self-harming behaviours among individuals in Ontario forensic system: the mediating role of psychopathy

The criminal justice system has consistently had a large representation of individuals with psychopathy and of those who have experienced adverse childhood events (ACEs). ACEs are traumatic events (e.g., abuse, neglect, household dysfunction, and exposure to violence) that occur before age 18 and can negatively affect physical and mental health. In general, ACEs are well known to predict a wide range of negative outcomes, such as violence, certain personality disorders, and criminogenic behaviours. Previous research has reported similar prevalence rates for ACEs across correctional and forensic psychiatric populations and has identified both shared and unique features of ACEs and their impacts on the two population groups. In Canada, forensic psychiatric patients are individuals who have committed a criminal offense and are found not criminally responsible (NCR) or unfit to stand trial due to a mental disorder. Compared to the general population, forensic patients have higher rates of ACEs, self-harm, and psychopathy: a condition characterized by a lack of empathy, remorse, and guilt, as well as impulsivity, antisocial behaviour, and manipulation. A concise overview of relevant themes from the literature is provided below to serve as a broad background for the empirical study reported in this paper.
ACEs and self-harming behaviours
ACEs can have a profound negative impact on an individual, particularly one involved in the criminal justice system. Evidence from the literature on forensic psychiatric patients shows that ACEs consistently predict self-harming behaviour. The greater the number of ACEs, the more likely an individual is to engage in self-harming conduct during adulthood.
Moreover, some studies have highlighted the importance of the various forms of ACEs (e.g., parental substance use, having a household member with a mental illness, physical abuse, emotional abuse, and a history of bullying) to the risk of self-harming behaviour. For example, emotional and sexual abuse were the ACEs most commonly associated with future self-harming behaviour among incarcerated females. These findings highlight the variability in the detrimental effects that different types of ACEs can have on an individual's self-harming behaviours, depending on the nature and severity of the ACEs and the personal factors of the victims.
ACEs and psychopathy
Research has shown that specific ACEs, such as physical abuse during childhood, are significant predictors of psychopathic traits, primarily in individuals involved in the criminal justice system. Closely linked is the observation that forensic samples presenting with psychopathic traits tend to have high incidences of ACEs, and the severity of the ACE (e.g., more severe childhood physical abuse) was positively associated with more severe psychopathic traits, specifically within the male forensic population.
Psychopathy and self-harming behaviours
The relationship between psychopathy and self-harm behaviour is complex, and several studies have noted that self-harm shares a bifurcated relationship with Factors 1 and 2 of the two-factor model of psychopathy. Factor 2 of the Psychopathy Checklist-Revised (PCL-R), captured by items that elicit antisocial behaviours (criminal versatility, impulsiveness, irresponsibility, poor behavioural controls, and juvenile delinquency), was more strongly associated with engaging in self-harming behaviours than Factor 1 (affective-interpersonal deficits). Similarly, self-harming behaviour was positively related to specific characteristics of psychopathy, such as high impulsivity and sensation-seeking, in the forensic population.
Similar findings in previous reports on non-clinical samples (e.g., undergraduate students) have demonstrated an association between Factor 2 and suicidal behaviour. This is most likely due to the high loading of impulsivity and antisocial tendencies on Factor 2.
Relationship between psychopathy, ACEs, and self-harming behaviours
Individuals with severe mental illness (such as those in forensic psychiatric settings) are more prone to engage in self-harming behaviours. ACEs have been implicated as one of the plausible explanatory factors for self-harming behaviours. Previous studies among forensic populations have demonstrated an increased likelihood of engaging in self-harming behaviours in individuals with a history of exposure to ACEs or those with psychopathic traits. Taken together, it is tenable to suggest that exposure to ACEs can lead to psychopathic traits, which in turn can heavily influence the prevalence of self-harming behaviour. Therefore, there is a need to explore the inter-relatedness of ACEs, psychopathy, and self-harming behaviours in the forensic population.
Mediating effects of psychopathy on the relationship between ACEs and self-harming behaviours
While previous studies have established a link between ACEs and self-harming behaviours, the contribution and interplay of identifiable putative factors in this relationship remain unclear. Some theories have indicated that psychopathy (or PCL-R scores) can mediate the relationship between ACEs and self-harming behaviour. One potential reason for this relationship is that when someone experiences multiple ACEs, they may develop psychopathic traits, such as impulsive behaviour and a lack of emotional regulation, to help cope with their situation and previous stressful circumstances or adverse experiences.
In turn, impulsive behaviours and antisocial tendencies are positively associated with self-harming behaviours, suggesting a mediating effect of psychopathy (or PCL-R scores) on the risk of self-harm among individuals exposed to ACEs.
The present study
Self-harm is a significant public health issue that can lead to severe complications, including suicide, infection, psychosocial impairment, and disability. Understanding the factors associated with self-harming behaviours is a significant step toward mitigating the risks, especially among at-risk populations (e.g., individuals in the forensic system). Among forensic patients, previous studies have shown a link between ACEs and an increased risk of self-harming behaviours, such as cutting, burning, or hitting oneself. Closely related is the possibility that psychopathy may influence the relationship between ACEs and self-harm by affecting emotional regulation, coping skills, and motivation for self-injury in the affected individuals. However, there is scant research on the mediating effects of psychopathy on the association between ACEs and self-harm among forensic patients. The present study aims to fill this gap by examining the role of psychopathy in the link between ACEs and self-harming behaviours among forensic patients. The study utilized data on individuals under the Ontario Review Board (ORB) in 2014 and 2015. The database was created to capture information from ORB reports for a defined period on study-specific items, including measures of ACEs, psychopathy, and self-harm. The study will test the hypothesis that psychopathy mediates the effect of ACEs on self-harm. Ultimately, we hope that findings from the study will extend current knowledge on the etiology and prevention of self-harm among forensic patients and improve the understanding of the interplay of psychopathy with ACEs and self-harm in this population. Specific hypotheses based on the current literature are listed below.
Hypotheses
H1: Exposure to ACEs will be positively associated with involvement in self-harming behaviours.
H2: Exposure to ACEs will be positively correlated with psychopathy.
H3: A higher score for psychopathy will be positively associated with self-harming behaviours.
H4: On the basis of the above relationships, psychopathy is likely to mediate the relationship between exposure to ACEs and involvement in self-harming behaviours (Fig. ).
Study design and participants
The mediation analysis reported in this study was prepared following the Guideline for Reporting Mediation Analyses (AGReMA). We included individuals in the databases with complete data from screening with the PCL-R that resulted in scores for psychopathy for the reporting years of 2014 and 2015 (n = 593). Individuals in the forensic system are screened with the PCL-R based on clinical indications or their presentation, particularly those with multiple symptoms signalling psychopathy. The PCL-R is also completed as part of psycho-diagnostics and/or risk assessment for forensic patients.
Study variables
Exposure (independent variable)
Adverse childhood events (ACEs) were considered the exposure variables. Eight types of ACEs were captured (details are provided in the study results), and each variable was dichotomized (yes/no). A "yes" response indicated exposure to an ACE and was scored one; a "no" response indicated the absence of exposure and was scored zero. The total score across all ACEs was used to determine the severity of ACEs experienced, with severity scores ranging between zero and eight.
Mediator
The Psychopathy Checklist-Revised (PCL-R) score was considered the mediator variable. The PCL-R is commonly used to assess the presence of psychopathic traits in an individual. The total score was captured from the ORB reports. Psychiatrists and/or psychologists trained in using the PCL-R assessed for psychopathic traits based on the tool. The total score ranges between 0 and 40, with a higher score indicating a higher risk of violence and psychopathic traits. A cut-off of 30 was used to categorize individuals with psychopathy.
Outcome
The past-year and lifetime history of self-harming behaviour was compiled using a variable that captured self-harm during the reporting year under the ORB system. The variable was reported as yes or no for the presence or absence of self-harming behaviours, respectively.
Covariates
The covariates consisted of demographic variables (age, gender, level of education, and marital status) and clinical characteristics (lifetime history of substance use, previous psychiatric hospitalization, primary psychiatric diagnosis, and presence of a comorbid psychiatric diagnosis).
Data analysis
Data were cleaned and analysed using STATA version 16. Continuous variables were presented using means and standard deviations, while categorical variables were presented using frequencies and percentages.
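The variable definitions above can be sketched in code. The item names below are hypothetical placeholders (the dataset's actual variable names are not given in the paper); only the scoring rules — one point per "yes" across eight items, and a PCL-R cut-off of 30 — come from the study:

```python
# Hypothetical ACE item names; the study captured eight yes/no ACE types.
ACE_ITEMS = [
    "child_abuse", "parental_loss_before_18", "foster_home",
    "household_mental_illness", "household_incarceration",
    "parental_substance_use", "witnessed_violence", "intergenerational_abuse",
]

def ace_severity(responses):
    """Total ACE score: one point per 'yes', zero per 'no' (range 0-8)."""
    return sum(1 for item in ACE_ITEMS if responses.get(item) == "yes")

def has_psychopathy(pclr_total, cutoff=30):
    """Dichotomize the PCL-R total (range 0-40) at the study's cut-off of 30."""
    return pclr_total >= cutoff

print(ace_severity({"child_abuse": "yes", "foster_home": "yes"}))  # → 2
print(has_psychopathy(32))  # → True
```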
Inferential statistics were conducted using chi-square tests and t-tests for categorical and continuous variables, respectively. Pearson correlation coefficients were used to show the relationship between continuous variables. A p-value of < 0.05 was set as statistical significance with a 95% confidence interval. Mediation analysis was based on the Baron and Kenny approach, with the PCL-R score as the mediator, self-harming behaviours as the outcomes, and the total ACE score as the exposure. The Baron and Kenny approach is based on the following steps: Step 1 involves regression between the exposure variable (total ACEs) and the mediating variable (PCL-R score); Step 2 involves regression between the mediating variable and the outcome variable (self-harming behaviours); Step 3 involves regression between the exposure variable and the outcome variable; then Sobel's test is conducted. Sobel's test assesses the statistical significance of the indirect effect of the exposure on the outcome through the mediator, using the effect sizes and standard errors from Steps 1 and 2. If Sobel's test is significant, then mediation is supported. However, if Steps 1 or 2 are statistically significant but Sobel's test is not, the mediation is partial. Otherwise, mediation is absent. In STATA, we employed the following commands: (i) sem, (ii) estat teffects, and then (iii) medsem to test for mediation. Sensitivity analyses for mediation effects of PCL-R on the relationship between individual types of ACEs and self-harming behaviours (both past year and lifetime) were completed. Therefore, a total of nine mediation tests were performed.
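The regression steps and Sobel's test described above can be reproduced outside STATA. The sketch below is a minimal illustration on simulated stand-in data — the variable distributions and coefficients are entirely hypothetical, and a simple linear probability model stands in for the SEM the study actually ran:

```python
import numpy as np

def ols(predictors, y):
    """OLS with an intercept; returns coefficients and their standard errors."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta, np.sqrt(np.diag(cov))

rng = np.random.default_rng(42)
n = 593  # sample size matching the study
aces = rng.integers(0, 9, n).astype(float)                # total ACE score, 0-8
pclr = np.clip(1.1 * aces + rng.normal(14, 7, n), 0, 40)  # PCL-R total, 0-40
self_harm = (0.003 * pclr + rng.normal(0.1, 0.3, n) > 0.35).astype(float)

# Step 1: regress the mediator (PCL-R score) on the exposure (total ACEs).
b1, se1 = ols([aces], pclr)
a, se_a = b1[1], se1[1]

# Step 2: regress the outcome on the mediator, controlling for the exposure.
b2, se2 = ols([aces, pclr], self_harm)
b, se_b = b2[2], se2[2]

# Sobel z-statistic for the indirect effect a*b.
z = (a * b) / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
print(f"a = {a:.3f}, b = {b:.4f}, Sobel z = {z:.2f}")
```

Comparing |z| against 1.96 gives the two-sided 5% significance decision that, combined with the Step 3 regression, distinguishes complete, partial, and absent mediation under the scheme above.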
Study sample
The data were based on individuals who had complete data on all the main variables of the study. The PCL-R score was normally distributed with a kurtosis of 2.47 and a skewness of 0.23. A total of 48 participants had no recordings of ACEs; these were denoted as missing. The remaining ACE results were normally distributed with a kurtosis of 4.01 and a skewness of 1.0.
Clinical and sociodemographic characteristics
The mean age of the participants was 41.21 (± 12.35) years.
A total of 545 (92.37%) individuals were male. Most of the participants were single (96.17%) and had an education level ranging between grades 9 and 13 (57.10%). Most included individuals were being managed for a psychotic disorder [schizophrenia and other psychotic disorders] (84.75%), used psychoactive substances (73.41%), and had a comorbid medical illness (80%) (see Table ).
ACEs
The average number of ACEs experienced was 1.22 ± 1.30. Individuals who attained lower levels of education experienced more ACEs than those with a post-secondary level of education. The use of substances of addiction was associated with experiencing significantly more ACEs. Also, individuals with comorbid medical conditions experienced more ACEs than those without (for details, see Table ). Approximately 61.86% of the participants experienced ACEs. The most commonly experienced ACE was child abuse (31.12%, n = 178), followed by loss of a parent before 18 years (28.96%, n = 170); intergenerational abuse (0.51%, n = 3) was the least experienced ACE (Table ).
PCL-R score
The mean PCL-R score was 15.26 ± 7.42, and there were statistically significant differences in PCL-R scores based on the participants' gender, education level, history of substance use, primary psychiatric diagnosis, and having a comorbid medical condition. That is, the score was statistically higher among males compared to females, those with lower education, those who used substances, and those with comorbid medical illnesses (for details, see Table ). At a cut-off of 30, the prevalence of psychopathy was 7.46% (n = 44), and no individuals scored between 25 and 30 (a cut-off for psychopathy in some studies).
Self-harming behaviours
The prevalence of lifetime engagement in self-harming behaviour was 17.80% (n = 105). Proportionally more females had engaged in self-harming behaviours in their lifetime compared to males (31.11% vs. 16.70%, χ2 = 5.90, p-value = 0.015).
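The gender comparison above can be checked by computing a Pearson chi-square by hand from a 2x2 table. The counts below are back-calculated from the reported percentages (14/45 females and 91/545 males with lifetime self-harm, consistent with 105 lifetime cases overall) and are illustrative rather than taken directly from the dataset:

```python
import numpy as np

# 2x2 table: rows = female, male; columns = lifetime self-harm yes, no.
# Counts back-calculated from the reported percentages (illustrative).
table = np.array([[14.0, 31.0],
                  [91.0, 454.0]])

row_totals = table.sum(axis=1, keepdims=True)   # 45 females, 545 males
col_totals = table.sum(axis=0, keepdims=True)   # 105 yes, 485 no
expected = row_totals @ col_totals / table.sum()

# Pearson chi-square statistic (no continuity correction).
chi2 = ((table - expected) ** 2 / expected).sum()
print(round(chi2, 2))  # → 5.9
```

With 1 degree of freedom this statistic corresponds to a p-value of about 0.015, matching the value reported in the text.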
Also, individuals with a comorbid medical illness had engaged more in self-harming behaviours in their lifetime (19.70% vs. 10.17%; χ2 = 5.86, p-value = 0.015). About 4.43% (n = 26) had self-harming behaviours over the ORB reporting years explored in this study, and among them, 19 (73.08%) had engaged in self-harming behaviours in the past year. Similar to lifetime self-harming behaviours, past-year self-harming was statistically significantly higher among individuals with neurodevelopmental or personality disorders (Table ).
Relationship of ACEs with PCL-R scores, psychopathy, and self-harming behaviours
With the exception of intergenerational abuse and staying in a household with an individual having a mental illness before the age of 18, all of the other types of ACEs showed statistically significantly higher mean PCL-R scores among individuals who had experienced them than among those who did not. There was no statistical difference between individual ACEs and psychopathy (for details, see Table ). Among individuals who had ever engaged in self-harming behaviour (lifetime), nine (8.57%) had psychopathy, and there were statistically significantly more individuals with lifetime self-harming behaviours without psychopathy than with psychopathy (91.43% vs. 8.57%, χ2 = 7.95, p-value < 0.001). Individuals who experienced the following types of ACEs, i.e., had lived in a household with an individual with mental illness before 18 years, lived in a foster home, or had experienced child abuse, engaged in more lifetime self-harming behaviours and, on average, had a higher PCL-R score than those who did not (Table ). There was no statistical difference between individual types of ACEs and past-year self-harming behaviours (Table ). Among individuals with past-year self-harming behaviours, three (11.54%) had psychopathy, and there were significantly more individuals with past-year self-harming behaviours without psychopathy compared to those with psychopathy (88.46% vs.
11.54%, χ2 = 7.95, p = 0.005).

Correlation of PCL-R scores, total number of ACEs, and raw age

A significant positive correlation (r = 0.19) existed between the number of ACEs experienced and the PCL-R score (Table ).

Testing the mediating effect of PCL-R on the relationship between ACEs and self-harming behaviours

Past year self-harming behaviours

In step 1, ACEs were significantly associated with PCL-R scores (β = 1.085, p < 0.001). In step 2, the PCL-R score was significantly associated with past-year self-harming behaviours (β = -0.003, p = 0.005). However, in step 3, ACEs were not significantly associated with past-year self-harming behaviours (β = 0.007, p = 0.294). As steps 1 and 2 and Sobel's test are significant but step 3 is not, the mediation is complete (Supplementary Table ). After controlling for clinical and sociodemographic factors, step 1 showed that ACEs were significantly associated with PCL-R scores (β = 0.680, p = 0.002). In step 2, the PCL-R score was significantly associated with past-year self-harming behaviours (β = 0.003, p = 0.012). However, in step 3, ACEs were not significantly associated with past-year self-harming behaviours (β = 0.004, p = 0.557). As steps 1 and 2 are significant, and neither step 3 nor Sobel's test of the indirect effect (0.002, p = 0.052) was significant, the mediation of PCL-R between ACEs and past-year self-harming behaviour is partial (Fig. and Supplementary Table ).

Lifetime self-harming behaviours

In step 1, ACEs were significantly associated with PCL-R scores (β = 1.091, p < 0.001). In step 2, the PCL-R score was significantly associated with lifetime self-harming behaviours (β = 0.003, p < 0.001).
In step 3, ACEs were also significantly associated with lifetime self-harming behaviours (β = 0.026, p = 0.033). The mediation is partial, as steps 1, 2, and 3 and Sobel's test are all significant (Supplementary Table ). After controlling for clinical and sociodemographic factors, in step 1, ACEs were significantly associated with PCL-R scores (β = 0.678, p = 0.002). In step 2, the PCL-R score was significantly associated with lifetime self-harming behaviours (β = 0.010, p < 0.001). However, in step 3, ACEs were not significantly associated with lifetime self-harming behaviours (β = 0.017, p = 0.167). As steps 1 and 2 and Sobel's test (0.01, p = 0.013) are significant but step 3 is not, the mediation of PCL-R between ACEs and lifetime self-harming behaviour is complete (Fig. and Supplementary Table ).

Sensitivity analysis for the mediating effect of the PCL-R score on the relationships of total and individual types of ACEs with self-harming behaviours

The mediating effect of the PCL-R score for total ACEs almost mirrored that for individuals who had experienced child abuse or the incarceration of a household member (see Table ). Complete mediation was observed for lifetime self-harm among those with a history of child abuse or an incarcerated household member. The details of the sensitivity analysis are presented in Supplementary Tables and .

The data were based on individuals who had complete data on all the main variables of the study. The PCL-R score was normally distributed, with a kurtosis of 2.47 and a skewness of 0.23. A total of 48 participants had no recordings of ACEs; these were denoted as missing. The remaining ACE results were normally distributed, with a kurtosis of 4.01 and a skewness of 1.0. The mean age of the participants was 41.21 (± 12.35) years.
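The three-step mediation procedure with Sobel's test used above can be sketched in a few lines of Python. The coefficients and standard errors below are hypothetical placeholders (the text does not report standard errors), and `classify_mediation` simply encodes the decision rule as stated in this section: steps 1 and 2 plus Sobel's test significant with a non-significant step 3 is read as complete mediation, while all four significant is read as partial mediation.

```python
import math

def sobel_test(a, se_a, b, se_b):
    """Sobel z-test for the indirect effect a*b through a mediator.
    a: effect of the exposure (ACEs) on the mediator (PCL-R score);
    b: effect of the mediator on the outcome, adjusted for the exposure."""
    z = (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return z, p

def classify_mediation(step1_p, step2_p, step3_p, sobel_p, alpha=0.05):
    """Label mediation using the decision rule applied in the text:
    with steps 1, 2, and Sobel significant, a non-significant step 3
    (direct effect of ACEs on self-harm) means 'complete' mediation,
    and a significant step 3 means 'partial' mediation."""
    if step1_p < alpha and step2_p < alpha and sobel_p < alpha:
        return "partial" if step3_p < alpha else "complete"
    return "no mediation"

# Hypothetical coefficients and standard errors, for illustration only
z, p = sobel_test(a=1.09, se_a=0.30, b=0.003, se_b=0.001)
print(classify_mediation(0.001, 0.001, 0.167, p))  # -> "complete"
```

This is a simplified reading of the Baron and Kenny approach; in practice the three steps are separate regression models, and the adjusted past-year result above (labelled partial despite a non-significant Sobel test) would not fit this rule exactly.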
Overview of the study findings

The present study found a partial mediating effect of psychopathy on the relationship between ACEs and past-year self-harming behaviours. However, the mediation effect was complete in relation to lifetime self-harming behaviours.
Overall, the mediating effect of psychopathy on the relationship between total ACEs and self-harming behaviours almost mirrored that of individuals who had experienced child abuse and incarceration of a household member. Other interesting findings from the study and their implications are discussed below.

Prevalence of psychopathy, distribution of PCL-R score, and the associated factors

Of the 590 eligible individuals included, approximately 7.46% had psychopathy based on a cut-off score of 30. The prevalence rate in the present study is higher than the pooled prevalence of 1.2% reported in a meta-analytic review of studies conducted among the general population using the same tool and cut-off score . However, it is lower than the pooled prevalence of 27.8% for psychopathy from studies conducted among individuals in the correctional system charged with homicide . The differences in the rates of psychopathy between our study and the cited studies may be attributed to differences in the characteristics of the study populations. For example, it is possible that forensic patients (included in our study) are most likely to be diagnosed primarily with severe mental illness , and fewer of them may have psychopathy compared with offenders involved in homicide. Similarly, a lower mean PCL-R score was observed in our study participants compared with individuals convicted of homicide (15.26 ± 7.42 vs. 21.2 ± 5.3) . While the prevalence of psychopathy in our study is lower than in the correctional population with homicide, the results were close to those of the general population . In keeping with the findings documented in previous meta-analytic studies, the mean score on the measure of psychopathy (PCL-R) in the present study was higher among males than females . A detailed explanation for this difference has been described by Beryl et al. .
The present study also found that average PCL-R scores decreased as the level of education increased. This may be attributed to the idea that the antisocial behaviours, disregard for social norms, and impulsivity associated with psychopathy may lead to poor academic performance and, in turn, lower academic achievement . In addition, psychopathic characteristics may increase the chances of involvement with the criminal justice system, which may negatively affect an individual's progress in school. Contrary findings have been recorded for certain professions, especially in business, where individuals with higher mean PCL-R scores were high academic achievers . The mean score for psychopathy was also higher among individuals with two interlinked conditions, i.e., a substance use history and comorbid medical conditions , a relationship that may be attributed to the complicated lifestyle often adopted by individuals with higher psychopathic traits (e.g., not adhering to rules and instructions, such as failing to stay away from dangerous substances or to adhere to medication).

Prevalence of ACEs and the associated factors

Over 60% of the study participants had experienced ACEs, with child abuse being the most common. The high prevalence rate of ACEs in the present study is similar to findings among forensic populations in other parts of the globe, such as Sweden (57.2%) , the USA (79.4%) , and the UK (82.8%) . It is important to note that the average number of reported ACE events (1.22 ± 1.30) was lower in this sample than in previous studies that employed the same method of identifying ACEs, such as 2.63 ± 2.3 among a sample of 157 forensic psychiatric patients from the USA . The difference may be attributed to the smaller number of ACE types assessed in the current study (8), whereas many studies assess more.
The mean total number of ACEs experienced decreased as the education level increased, a finding consistent with previous studies . A plausible explanation may be that ACEs have been linked with impairment of cognitive function, working memory, attention, and language acquisition, which can lead to poorer academic performance . However, it is important to note that some studies have reported no significant impact of ACEs on academic performance, findings attributed to individuals' resilience and protective factors . As with individuals who scored high on the PCL-R, those with a higher mean number of ACEs had a history of substance use and suffered from a comorbid medical condition. In the present study, the number of ACEs correlated positively with the PCL-R score. The existing literature has consistently reported a link between ACEs and psychopathy . These findings further support the notion that individuals with a high number of ACEs are more likely to have a significantly higher PCL-R score, except, in this study, for individuals whose ACE resulted specifically from intergenerational abuse or from staying in a household with an individual diagnosed with mental illness before the age of 18.

Prevalence of self-harming behaviours and the associated factors

Among the study participants, about 4.43% had self-harming behaviours during the reporting years under study. This prevalence is several fold lower than that reported in other forensic settings, including Sweden, the USA, and the UK, with prevalence ranging between 36.0% and 68.4% . The low prevalence in the present study may be attributed to the nature of the sample population, made up of individuals selected for PCL-R evaluation. By practice, not every forensic psychiatric patient in Ontario is assessed using the PCL-R.
Those deemed to have a high suspicion of psychopathy are assessed, thus skewing the sample toward those more likely to screen positive for psychopathy or score highly on the PCL-R. These individuals may score highly on both Factor 1 and Factor 2 of the PCL-R. Individuals who met the criteria for psychopathy in the present study had experienced fewer incidents of self-harming behaviour than those who did not. We speculate that scoring highly on the specific PCL-R items that load on Factor 1 (i.e., items related to the interpersonal and affective deficits of psychopathy, including shallow affect, superficial charm, manipulativeness, and lack of empathy), which are associated with less self-harming behaviour, led to the lower prevalence observed.

Mediating role of PCL-R score on the effects of ACEs on self-harming behaviours

The present study found a partial mediating role of the PCL-R score in the effects of total ACEs on past-year self-harming behaviours after controlling for other covariates. This indicates that, in addition to the PCL-R score, other variables may explain the effects of ACEs on self-harming behaviour, such as biological factors like inflammation , an aspect outside the scope of the present study. Consequently, further research is warranted to fully understand the interplay of psychopathic traits and other putative factors in the relationship of ACEs with self-harming behaviours among forensic patients. The partial mediation may also be due to the tool used (i.e., the PCL-R), which may not capture all aspects of psychopathy or personality relevant to self-harm. For example, some researchers have argued that the PCL-R may not adequately measure affective and interpersonal dimensions of psychopathy, such as callousness, narcissism, or Machiavellianism, that may relate to self-harm .
On the other hand, the mediating relationship of the PCL-R score between ACEs and self-harming may arise because individuals who have experienced ACEs may develop psychopathic traits as a maladaptive coping mechanism . The psychopathic traits (captured by the PCL-R) may, in turn, increase the likelihood of engaging in self-harming behaviours as a form of emotional regulation or to exert control . Based on the sensitivity analysis, psychopathy loaded higher as a mediator of self-harming behaviours for individuals with ACEs from living in a foster home, having a family member previously incarcerated, and having a history of child abuse. These findings may be explained by several factors, including inherited genetic influences (genes that influence psychopathy and/or involvement in self-harming behaviours), the adoption of maladaptive coping styles, and a vulnerability index. Our findings among individuals with a family member incarcerated before they were 18 years old may be related to the interplay of genetics (inheritance) and the learning of the maladaptive coping strategies used by the family member who was incarcerated. This nature-and-nurture effect may lead to using self-harming behaviours as a coping skill, developing psychopathic traits, and ending up within the criminal justice system. Research has implicated genetic links for psychopathy among multiple family members . Individuals who stay in a foster home may be exposed to various forms of childhood trauma (e.g., child abuse, neglect, instability) that may impact their emotional development and attachment security . Consequently, they are vulnerable to developing emotional dysregulation and psychopathic traits (such as a lack of empathy, remorse, or guilt) that are precursors for risky behaviours .
Owing to emotional dysregulation and the inadequate development of coping skills among these children, some may use self-harming behaviours to cope with negative emotions, express anger or frustration, seek attention or validation, or manipulate others . In addition, individuals who go through the foster care system may have poor social support and limited access to quality mental health services for children. As a result, they may feel isolated, helpless, and hopeless, making engagement in self-harming behaviours more likely as a coping mechanism. There are several potential explanations for the complete mediating effect of psychopathy on the linkage between being in foster care and self-harming behaviours. For example, some individuals in foster care may sustain brain damage from severe life experiences while in the system and develop psychopathic traits that increase their vulnerability to engaging in self-harming behaviours .

Limitations

The following limitations should be considered in interpreting these study findings: (1) the individual facets of the PCL-R were not captured and used in the current analysis, despite their strong and unique relationships with the variables assessed. Future studies should explore the interplay of the PCL-R facets in the relationship of ACEs with self-harming behaviours so that a targeted approach can be designed to mitigate the effects of such specific items as part of interventions to reduce self-harming behaviours; (2) self-harm was based on witnessed and reported incidents.
This may be affected by the quality of information captured in the ORB report, and under-reporting of incidents is possible; (3) the cross-sectional study design limits inferences on causality, and a more robust prospective design should be employed in future studies; and (4) there is a likelihood of systematic bias, since the individuals selected for a PCL-R assessment depend on clinician judgment, institutional policy, or the requirements of the ORB annual hearing. This may leave out some individuals who would score differently on the PCL-R, potentially altering the picture of the mediating relationship captured. Lastly, despite the popularity of the PCL-R among forensic psychiatry patients in Ontario, no available data have validated its use among patients with antisocial personality disorder, whose presentation and etiology may be similar to psychopathy . Yet they may pose varying risks of self-harming or of having been exposed to ACEs.

Conclusions

Among forensic patients in Ontario, psychopathy plays a mediating role in the effects of ACEs on engaging in self-harming behaviours. This role is seen mainly in individuals whose ACEs involved child abuse, incarceration of a household member, or having lived in a foster home. For effective interventions to reduce self-harming behaviours, adequate attention should be given to the effects of identifiable mediators. Further studies are recommended to explore the interplay of specific factors or items of the PCL-R in the risk attributable to ACEs for incidents of self-harming behaviours in the forensic population.
Overall, the mediating effect of psychopathy on the relationship between total ACEs and self-harming behaviours almost mirrored that of individuals who had experienced child abuse and incarceration of a household member. Other interesting findings from the study and the implications are discussed below. Out of the 590 eligible individuals who were included, approximately 7.49% had psychopathy based on a cut-off score of 30. The prevalence rate in the present study is higher than the pooled prevalence rate of 1.2% reported in a meta-analytic review of studies conducted among the general population using the same tool and cut-off score . However, the prevalence reported in the current study is lower than the pooled prevalence of 27.8% for psychopathy from studies conducted among individuals in the correctional system charged with homicide . The differences in the rates of psychopathy between our study and the cited studies may be attributed to the differences in the characteristics of the study populations. For example, it is possible that forensic patients (included in our study) are individuals most likely to be diagnosed primarily with severe mental illness , and fewer of them may have psychopathy compared to offenders involved in homicide. Similarly, a lower PCL-R mean score was observed in our study participants compared to individuals convicted of homicide (15.26 ± 7.42 vs. 21.2 ± 5.3) . While the prevalence of psychopathy in our study is lower compared to the correctional population with homicide, the results were close to those of the general population . In keeping with the findings documented in previous meta-analytic studies, the mean score of the measure (PCL-R) for psychopathy in the present study was higher among males than females . A detailed explanation for this difference has been described by Beryl et al. . The present study also found that the average PCL-R scores decreased with an increase in the level of education. 
This may be attributed to the idea that antisocial behaviours, disregard for social norms, and impulsive behaviours that are associated with psychopathy may lead to poor academic performance and, in turn, lower academic achievements . In addition, psychopathic characteristics may lead to higher chances of involvement with the criminal justice system, which may negatively affect an individual’s progress in school. Contrary findings have been recorded for certain professions, especially in business, where individuals with higher mean scores on the PCL-R were high academic achievers . The mean score for psychopathy was also higher among individuals with two interlinked conditions, i.e., substance use history and comorbid medical conditions , a relationship that may be attributed to the complicated lifestyle (e.g., not adhering to rules and instruction, such as failure to stay away from dangerous substances or follow medication adherence often adopted by individuals with higher psychopathic traits.) Over 60% of the study participants experienced ACEs, with most experiencing child abuse. The high prevalence rate of ACEs in the present study is similar to the findings among forensic populations in other parts of the globe, such as Sweden (57.2%) , USA (79.4%) , and UK (82.8%) . It is important to note that the average number of reported ACEs events (1.22±1.30) was lower in this sample than in previous studies that employed the same method of identifying ACEs, such as 2.63±2.3 among a sample of 157 forensic psychiatric patients from the USA . The difference may be attributed to the smaller number of ACEs identified in the current study (8), while many studies identify more. The mean for the total number of ACEs experienced decreased with an increase in the education level, a finding consistent with other previous studies . 
A plausible explanation may be that ACEs have been linked with impairment of cognitive function, working memory, attention, and language acquisition, which can lead to poorer academic performance . However, it is important to note that some studies have reported no significant impact of ACEs on academic performance, findings attributed to individuals’ resilience and protective factors . Similar to individuals who scored high on the PCL-R, those with a higher mean number of ACEs had a history of substance use and suffered from a comorbid medical condition. In the present study, an increase in ACEs correlated positively with PCL-R score. Existing literature has consistently reported a link between ACEs and psychopathy . These findings further support the notion that individuals with a high number of ACEs are more likely to have a significantly higher PCL-R score, except, in this study, for individuals whose ACE resulted specifically from intergenerational abuse or from staying, before the age of 18, in a household with an individual diagnosed with mental illness. Among the study participants, approximately 5% had self-harming behaviours during the reporting years under study. This prevalence is severalfold lower than reported in other forensic settings, including Sweden, the USA, and the UK, with prevalence ranging between 36.0% and 68.4% . The low prevalence in the present study may be attributed to the nature of the sample population, made up mainly of individuals selected for psychopathy assessment with the PCL-R. In practice, not every forensic psychiatric patient in Ontario is assessed using the PCL-R; those deemed to be under high suspicion of having psychopathy get assessed, thus skewing the sample toward individuals more likely to screen positive for psychopathy or score highly on the PCL-R. These individuals with higher scores may score highly on both Factors 1 and 2 of the PCL-R.
Individuals who met the criteria for psychopathy in the present study experienced fewer incidences of self-harming behaviours than those who did not. We speculate that the influence of scoring highly on the specific PCL-R items that load on Factor 1 (i.e., items related to the interpersonal and affective deficits of psychopathy, including shallow affect, superficial charm, manipulativeness, and lack of empathy), which are associated with less self-harming behaviour, led to the lower prevalence observed. The present study found a partial mediation role of PCL-R score on the effects of total ACEs on past-year self-harming behaviours after controlling for other covariates. This indicates that, in addition to PCL-R score, other variables may explain the effects of ACEs on self-harming behaviour, such as biological factors like inflammation , an aspect that is outside the scope of the present study. Consequently, further research is warranted to fully understand the interplay of psychopathic traits and other putative factors in the relationship of ACEs with self-harming behaviours among forensic patients. Again, the partial mediation may be due to the tool used (i.e., the PCL-R), which may not capture all aspects of psychopathy or personality that are relevant to self-harm. For example, some researchers have argued that the PCL-R may not be adequate to measure affective and interpersonal dimensions of psychopathy, such as callousness, narcissism, or Machiavellianism, that may relate to self-harm . On the other hand, the mediating relationship of the PCL-R on the effects of ACEs on self-harming may arise because individuals who have experienced ACEs may develop psychopathic traits as a maladaptive coping mechanism . The psychopathic traits (captured by the PCL-R) may, in turn, increase the likelihood of engaging in self-harming behaviours as a form of emotional regulation or to exert control .
Based on sensitivity analysis, psychopathy loaded higher as a mediator for self-harming behaviours for individuals with ACEs from living in a foster home, having a family member previously incarcerated, and having a history of child abuse. These findings may be explained by several factors, including inherited genetic influences (genes that influence psychopathy and/or involvement in self-harming behaviours), the adoption of maladaptive coping styles, and individual vulnerability. Our findings among individuals with a family member incarcerated before they were 18 years old may be related to the interplay of genetics (inheritance) and the learning of maladaptive coping strategies used by the family member who ended up incarcerated. This nature-and-nurture effect may lead to using self-harming behaviours as a coping skill, developing psychopathic traits, and ending up within the criminal justice system. Research has implicated genetic links for psychopathy among multiple family members . Individuals who stay in a foster home may be exposed to various forms of childhood trauma (e.g., child abuse, neglect, instability) that may impact their emotional development and attachment security . Consequently, they are vulnerable to developing emotional dysregulation and psychopathic traits (such as lack of empathy, remorse, or guilt) that are precursors of risky behaviours . Owing to this emotional dysregulation and the inadequate development of coping skills, some of these children may use self-harming behaviours to cope with negative emotions, express anger or frustration, seek attention or validation, or manipulate others . In addition, individuals who go through the foster care system may have poor social support and limited access to quality mental health services for children. They may thus feel isolated, helpless, and hopeless, and engaging in self-harming behaviours becomes more likely as a coping mechanism.
There are several potential explanations for the complete mediating effect of psychopathy on the linkage between being in foster care and self-harming behaviours. For example, some individuals in foster care may have brain damage from encountering severe life experiences while in the system and develop psychopathic traits that increase their vulnerability to engage in self-harming behaviours . The following limitations should be considered in interpreting these study findings: (1) the individual facets of the PCL-R were not captured and used in the current analysis despite their strong and unique relationship with the variables assessed. Future studies should explore the interplay of the PCL-R facets on the relationship of ACEs with self-harming behaviours so that a targeted approach can be designed to mitigate the effects of such specific items as part of the interventions to reduce self-harming behaviours; (2) Self-harm was based on witnessed and reported incidents. This may be affected by the quality of information captured in the ORB report, and under-reporting of the incidents is possible; (3) The cross-sectional study design also limits inferences on causality, and a more robust prospective design should be employed in future studies, and (4) There is the likelihood of the introduction of systematic bias in the study since the individuals who are selected to have a PCL-R are dependent on clinician judgment, institutional policy, or requirement for ORB annual hearing. These may leave out some individuals who may score differently on the PCL-R, potentially leading to an altered picture of the mediating relationship captured. Lastly, despite the popularity of the use of the PCL-R tool among forensic psychiatry patients in Ontario, no available data has validated its use among patients with antisocial personality disorder, whose presentation and etiology may be similar to psychopathy . 
Yet, they may pose varying risks of self-harming behaviour or differing histories of exposure to ACEs. Among forensic patients in Ontario, psychopathy plays a mediating role in the effects of ACEs on engaging in self-harming behaviours. This role is seen mainly in individuals who had ACEs involving child abuse, incarceration of a household member, and having lived in a foster home. For effective intervention to reduce self-harming behaviours, adequate attention should be given to the effects of identifiable mediators. Further studies are recommended to explore the interplay of specific factors or items of the PCL-R on the risk attributable to ACEs for incidents of self-harming behaviours in the forensic population. Electronic supplementary material: Supplementary Material 1; Supplementary Material 2
History of ocular plastic surgery in Brazil - Memories

The Code of Hammurabi of the King of Babylon (2250 BC) provides the first reference
to ocular plastic surgery. From the time of Hippocrates (460-370 BC) to the
nineteenth century, there are only two noteworthy reports of
blepharoplastic-cosmetic reconstructions in the eyelid region, made by Aulus Cornelius Celsus (25 BC to 50 AD) and Ambroise Paré (1509-1590). The treatment of orbital and
eyelid lesions remained outdated until the beginning of the Contemporary Age when
the works of Von Graefe (1787-1840), a pioneer of ophthalmology and creator of
several facial plastic surgeries, were published in 1818. He was later followed by
Johann Friedrich Dieffenbach (1792-1847), the author of several procedures for strabismus and reconstruction of the lower eyelids that were published in 1829. Dr. Dieffenbach is generally regarded as the father of plastic surgery.
“War is the only proper school for surgeons” (Hippocrates). Influenced by the horrors of the two world wars (1914-1918; 1939-1945), plastic surgeons were overwhelmed and unable to deal with the thousands of mutilated patients, and orbital and eyelid lesions were referred to eye surgeons at that time. The demands and challenges of World War I led John Martin Wheeler
(1879-1938) to develop reconstructive surgical techniques that were published in
1920 . Wheeler was the
first to provide courses to teach ophthalmic plastic surgery and earned the title
“Father of Ophthalmic Plastic Surgery.” During World War II, when ocular plastic surgery was recognized as a subspecialty of
ophthalmology in the United States, Wendell L. Hughes (1900-1994), one of Wheeler’s
students, advanced oculoplastic surgery not only through publications disseminating his experience but also by teaching and training several famous surgeons. In 1969,
he founded the American Society of Ophthalmic Plastic and Reconstructive Surgery
(ASOPRS) and was its first president. In Britain, World War II allowed Hyla B. Stallard (1901-1973) to play the same role
in pioneering and significantly contributing to the progress of oculoplastics. Among
his illustrious followers, it is worth mentioning John Clark Mustardé, one of
the pillars of modern oculoplastic surgery. Mustardé exercised significant
influence on the foundation of the Brazilian society.
JOSÉ LOURENÇO DE MAGALHÃES
Born in Sergipe, Brazil, José Lourenço de Magalhães
graduated as a Doctor of Medicine at the pioneering Medical School of Salvador,
Bahia. As a member of the Imperial Academy of Medicine, he also authored the first publication concerning the correction of an eyelid deformity, viz., “Surgery of
ectropion,” which was published by the Medical Gazette of Bahia (1864).
DONATO VALLE
Born in Varginha, Brazil, and trained in otolaryngology, Donato Valle honed his
surgical expertise at the Penido Burnier Institute. He presented his first work
at the first Brazilian Congress of Ophthalmology (“Dacryocystitis and its treatment”) in 1935. He perfected the technique of transcutaneous dacryocystorhinostomy described by Dupuy-Dutemps, publishing his procedure in the Arquivos Brasileiros de Oftalmologia, vol. 3:101-125, 1940. He also contributed new instruments that facilitated the execution of the procedure. Valle innovated surgery of the lacrimal system in Brazil, publishing several
works in the 1930s and 1940s.
IVO HÉLCIO JARDIM DE CAMPOS PITANGUY
Born in Belo Horizonte, Brazil, Ivo Pitanguy is considered the most renowned plastic surgeon in Brazil. He graduated in medicine in 1946 and spent his first years working in the most famous plastic surgery centers in the world, acquiring extensive knowledge and experience. Returning to Brazil in the late
1940s, a time when plastic surgery was not yet recognized as a medical
specialty, he created the service in the Santa Casa de Misericórdia
Hospital in Rio de Janeiro. There he began to train national and foreign experts
and taught training courses including in the eyelid area. His book, “Atlas of
Eyelid Surgery,” published in 1994, is an indicator of his experience and
attention to the importance of the orbital-palpebral area in the context of
esthetics and facial beauty.
BYRON CAPLEESE SMITH
Byron Smith was the Director of the Department of Ophthalmology at New York
Medical College and Emeritus Chief Surgeon of the Division of Ophthalmic Plastic
Surgery from Manhattan Eye, Ear and Throat Hospital. He was one of Hughes’
disciples during World War II. Together, they founded the first clinic entirely
devoted to oculoplastic surgery at New York University in 1941. His studies on
the mechanism of orbital fractures helped systematize the treatment of orbital
trauma. Dr. Smith answered several calls from Brazilian ophthalmology to give courses and lectures at meetings and was always forthcoming to all who sought his internships and teachings.
BERNARD A. WEIL
Bernard Weil was one of the pillars of dacryology. Alongside Benjamin Milder, he
published the book “The Lacrimal System” in 1983, which is considered one
of the most important works on the pathology and surgery of the lacrimal system.
He began contributing to the training of Brazilian ophthalmologists in October
1976, after his conference “Propedeutics of the Lacrimal System” held in the 3rd
Meeting of the Center of Studies on Oculoplastic in Rio de Janeiro. Several
experts, including Eduardo Soares in 1977 and Marilisa Nano Costa in 1980, have
had the privilege of doing internships under his guidance and to be at his
service at the Hospital de Niños and at the Centro Privado de Ojos in
Buenos Aires (Argentina). All are grateful for the teachings transmitted in
Weil’s courses, lectures, and conferences in our country.
JOHN CLARK MUSTARDÉ
Mustardé began practicing medicine as an ophthalmologist. However, during
World War II, he worked alongside the great masters in the field of general
plastic surgery, having also acquired the title of expert in this area. He is
admittedly one of the most important surgeons of the twentieth century in
medical practice, having been a pioneer in various procedures. His important
contributions to ophthalmic plastic surgery, especially his techniques of eyelid
reconstruction, must be particularly emphasized. In the teaching field, Professor Mustardé taught with great distinction, training disciples from several countries, notably at Canniesburn Hospital in Glasgow, Scotland. His book, “Repair and Reconstruction in the Orbital Region,”
1966, is a real bible for ocular plastic surgeons. Moreover, he taught the
exercise of Hippocratic and humanitarian medicine, seeking perfection with
humility and perseverance. In the associative area, he was the founder of several entities such as the
European Society of Ocular Plastic Surgery. Several honors and laurels were
granted to him, including the title of “Sir” by the Queen of England. He was present at the foundation of the Center of Studies of Oculoplastic
Surgery, which took place during a meeting of the Brazilian Ophthalmological Society in Rio de Janeiro, November 27, 1974. He was given the title of Honorary
President. The author thanks his contributions and teachings on behalf of
Brazilian Ophthalmology.
HILTON ROCHA
Hilton Rocha was the Chairman of the Department of Ophthalmology, Faculty of
Medicine, at the Federal University of Minas Gerais (Hospital São
Geraldo) and was responsible for the creation (in 1959) of the first
specialization course in ophthalmology in Brazil. Until then, ophthalmology was
not divided into sectors but was taught by teachers according to their personal
and professional experiences without a systematic program of teaching. He
implemented various sectors of specialty, initially contemplating glaucoma,
strabismus, retina, uveitis, contact lenses, and pathology. The first group of
ophthalmologists graduated in 1961. Since then, this model of education began to
be adopted by other institutions throughout Brazil. In 1966, foreseeing the
future of the specialty, Professor Hilton Rocha created the Sector of Ocular
Plastic Surgery, a pioneer in Brazil, and placed the young Eduardo Jorge
Carneiro Soares in charge . That
was how ocular plastic surgery began to be taught and added to the training of
Brazilian ophthalmologists.
The newly created Sector of Ocular Plastic Surgery began its activities in 1966,
working on the 3rd floor of Hospital São Geraldo in a very small room. It had
a spotlight along with a file for medical records and slides. Photodocumentation was
made using an Asahi Pentax camera equipped with a macro lens and ring flash. There
was only a simple wooden chair where the patient sat, another one for the examiner,
and a small table. The resident stood nearby. During this period, in addition to patient care, the author taught the theoretical program and supervised the outpatient and surgical cases of the residents of the Specialization Course in Ophthalmology, who rotated through the Sector for two months in the second year of the course. The seventh group of residents was the first to
receive teaching in oculoplastic surgery . The resident completed the internship with a basic understanding of the specialty.
The subject of oculoplastic surgery was then established in the Ophthalmologist
Training Course and served as an example to other university residences in Brazil to
incorporate it in their teaching programs. Furthermore, the Sector produced scientific publications, courses, lectures, and
presentations at congresses and meetings in Brazil and abroad, thus promoting the
specialty. In 1971, after his stay in Canniesburn Hospital in Glasgow (Scotland),
where Eduardo Soares worked with Professor John C. Mustardé, oculoplastics
specialization was created in the Fellowship system, with exclusive dedication for a
year. The first Fellow was Alfredo Bonfioli (04/01/74 to 03/31/75). Hence, this
course was born, a pioneer in Brazil, and it has produced oculoplastic surgeons
every year. It was recognized in 1988 by the Brazilian Council of Ophthalmology with
the name of Extension Course in Oculoplastic Surgery . Evaldo Santos, Eloy Pereira, and Eduardo Soares were the three doctors responsible
for creating a Center of Study to teach and promote ocular plastic surgery in
Brazil. They began to discuss the issue after presenting lectures in the same
scientific session of the Brazilian Congress of Ophthalmology in Porto Alegre (RS)
in 1969. Back then, there was only the Brazilian Center for Strabismus.
AND SO OUR SOCIETY WAS BORN (1974)
A significant incentive for the trio was given by Jack Mustardé in October
1971, when he visited Brazil at the invitation of the Brazilian Portuguese-Spanish Congress
in Rio de Janeiro (RJ). The statutes were prepared and, on November 21, 1974,
they founded the society with the name “CENTRO DE ESTUDOS DE PLÁSTICA
OCULAR” - CEPO (Study Center of Oculoplastics), at the head office of the
Brazilian Society of Ophthalmology in RJ . The statutes contemplated the aim of bringing together
ophthalmologists interested in the field to share knowledge and develop ocular
plastic surgery in Brazil. Professor John Clark Mustardé, present at the
meeting, was awarded unanimously the title of Honorary President. Eduardo Jorge
Carneiro Soares was awarded the President of the first Board of Directors for a
2-year term .
EVALDO MACHADO DOS SANTOS (1916-1999)
From Jaguarão (RS/Brazil), Evaldo Machado dos Santos graduated from the
Federal University of Rio Grande do Sul in 1941. In ophthalmology, he was a
student of Professor Ivo Correia Meyer. He was a specialist in strabismus and
also dedicated himself to oculoplastics and honed his skills under the guidance
of Professor Byron Smith in 1951 in New York (USA). He exercised his activities
at the Red Cross Hospital in RJ. He created the Ophthalmology Service of the Air
Force Hospital of RJ. His lectures were preferably about ptosis and marginal
deformities, areas in which he had great experience.
SEBASTIÃO ELOY PEREIRA (1936-2017)
He was born in Taubaté (SP) on the day of Saint Sebastian, January 20,
1936, a fact that explains the origin of his name. He graduated in medicine at
the University of Medical Sciences, RJ, and did his residence at the Department
of Ophthalmology under the guidance of Professor Werther Duque Estrada. In
oculoplastics, he was a student of John Clark Mustardé in Ballochmyle
Hospital, Scotland, in 1966. He returned to work under the service of Professor
Werther Duque Estrada in RJ. In November 1967, he moved to Campo Grande
(MS/Brazil) to head the Department of Ophthalmology at the Federal University of Mato Grosso do Sul (UFMS). He excelled in eyelid
reconstruction techniques, which were taught by Jack Mustardé. He was a
very skilled surgeon in reconstructive plastic surgery of the eyelids and
orbital region. He suspended these activities in 2013 due to health reasons. The
theme earned him courses, lectures, and publications.
EDUARDO JORGE CARNEIRO SOARES
Born in Belém (PA/Brazil) on October 5, 1938, Eduardo Jorge Carneiro
Soares graduated at the Medical School of the Federal University of Pará
in 1962. After graduation, he joined the fifth class of the Specialization
Course in Ophthalmology at the Medical School of the Federal University of Minas
Gerais (under the service of Professor Hilton Rocha) in São Geraldo
Hospital. Upon receiving the title of specialist in ophthalmology in 1965, he
was invited to the honorable mission of joining the faculty of the course as
head of the newly created Sector of Ocular Plastic Surgery. He raised the banner
“Learning and Teaching” that still flies to this day in his professional
routines. In 1971, he attended the Department of Plastic Surgery of Professor
Jack Mustardé in Canniesburn Hospital in Glasgow (Scotland), where he
honed his skills. Upon returning, he created in São Geraldo Hospital the
first year-long course of Fellowship in ocular plastic surgery with exclusive
dedication. From the beginning of his activities, Dr. Soares dedicated special
attention to mutilated patients by anophthalmic cavities. At that time,
Brazilian surgeons did not use implants to replace the ocular volume, thus
condemning patients to suffer physically and emotionally the hardships of
deformities inherent to empty anophthalmic sockets. There have been several
classes, lectures, and publications to change this situation. This culminated in
the doctoral thesis “The Importance of anatomical and functional Reconstruction
of the anophthalmic cavity in the Prevention and Treatment of the retraction
process of the conjunctival fornices,” approved at UFMG on August 28, 1992. The
victory of this struggle was happily achieved. In Brazil, it is now rare for a patient not to have the orbit reconstructed in the same act of enucleation or evisceration. Starting with only the three founders and a few faithful companions, CEPO was not an improvised undertaking, nor did it emerge as an ephemeral impulse of its founders’
aspirations. Among those companions from the first days who devoted themselves
to the institution and contributed to its progress, some
of the following people deserve special attention:
Armando Arede
Cássio Galvão Monteiro
Eurípedes Mota de Moura
Henrique Kikuta
Jaime Roizenblatt
Janduhy Perino Filho
Jorge Alberto de Oliveira
José Aparecido Deboni
José Daphnis Mil Homens Costa
José Vital Filho
Luiz Augusto Morizot Leite Filho
Marcos Cunha
Marilisa Nano Costa
Mário Luiz Monteiro
Mário Perez Genovesi
Mauro Rabinovith
Paulo Goes Manso
Roberto Abuchan
Roberto Caldato
Vicente Muniz de Carvalho
Waldyr Martins Portellinha
Zeniro José SanMartin
1979 - CHANGING THE NAME CEPO TO SBCPO
In 1979, a change in the legal situation of CEPO became necessary to allow its
integration along with the other affiliated societies to the Brazilian Council
of Ophthalmology. The CENTRO DE ESTUDOS DE PLÁSTICA OCULAR (CEPO) became
the SOCIEDADE BRASILEIRA DE CIRURGIA PLÁSTICA OCULAR (SBCPO) on September
8, 1979, in the General Assembly of CEPO held in São Paulo (SP) during
the XX Brazilian Congress of Ophthalmology. Under the board of directors chaired by Eurípedes da Mota Moura, with Waldyr Martins Portellinha as secretary, the Study Center was transformed into a legally constituted entity named the Brazilian Society of Ocular Plastic Surgery (SBCPO) .
The statutes of the Association were registered under No. 16,727, in the 3rd
Civil Registry of Legal Entities of São Paulo (SP). The legal regularization of CEPO, transforming its identity into a new
institution, allowed the Society to request its integration along with the other
affiliated societies to the Conselho Brasileiro de
Oftalmologia. This was formalized on October 22, 1981, under the
management of President Eduardo J. C. Soares' second term. Thus,
oculoplastic surgery was recognized by the Conselho Brasileiro de
Oftalmologia as a subspecialty of ophthalmology.

1997 - THE BOOK

On September 3, 1997, the book “Oculoplastic Surgery” - Roca Publishing,
São Paulo/SP - was launched in Goiânia (GO) in the presence of
Professors J.C. Mustardé and Richard Collin. This was the main theme of
the XXIX Brazilian Congress of Ophthalmology. The editors were Professors
Eduardo J.C. Soares, Eurípides M. Moura, and João Orlando R.
Gonçalves. The book was the conclusion of the work done since 1974, when
the Study Center of Oculoplastics (CEPO) was founded. The knowledge acquired by
collaborators through lectures, courses, symposia, and publications over all
these years was expressed and disseminated in that book. It brought together the
experience and lessons of all those who participated in the activities of the
Brazilian Society of Oculoplastic Surgery. Now sold out, the book has been very
useful not only for oculoplastic surgeons but also for Brazilian
ophthalmologists.

THE CURRENT MOMENT

Currently, the Brazilian Society of Oculoplastic Surgery (SBCPO) occupies a
prominent place among its peers, with 369 members in good standing with their
annual dues. There are currently 17 services in Brazil dedicated to fellowships,
12 of them in the southeast region. Several young oculoplastic surgeons excel
and dominate the national scene. The scientific level of the society's
congresses and meetings rises to international standards. The Brazilian
Society was recognized as a partner by the American Society in an agreement
signed in June 2013 and by the European Society in October 2017. It is
interesting to note that the Society has held together and remained united over
these 46 years. An analysis of the conference programs, courses, symposia, and
congresses organized by boards has shown that the Eyelids, Lacrimal System,
Orbit, and now Esthetics have been maintained as sisters and brothers from the
same family. This union promotes the progress and strength of the Society on the
national scene and, above all, gives it the power to defend its interests,
particularly regarding fairer fees. The moment is one of excitement and
satisfaction at what has been achieved by the generations that have succeeded
one another at the head of the Society.

RECOGNITION TO PRESIDENTS AND THEIR DIRECTORS

1975-1977 - Eduardo Jorge Carneiro Soares (MG)
1977-1979 - Evaldo Machado dos Santos (RJ)
1979-1981 - Eurípedes da Mota Moura (SP)
1981-1983 - Eduardo Jorge Carneiro Soares (MG)
1983-1985 - Sebastião Eloy Pereira (MS)
1985-1987 - Waldir Martins Portellinha (SP)
1987-1989 - Vicente Muniz de Carvalho (GO)
1989-1991 - Valênio Perez França (MG)
1991-1993 - Marilisa Nano Costa (SP)
1993-1995 - Waldir Martins Portellinha (SP)
1995-1997 - Roberto Caldato (SP)
1997-1999 - Hélcio Fortuna Bessa (RJ)
1999-2001 - Ana Rosa Pimentel (MG)
2001-2003 - Antônio Augusto Velasco e Cruz (SP)
2003-2005 - Ana Estela B. P. Santana (SP)
2005-2007 - Raquel Dantas (MG)
2007-2009 - Silvana Artioli Schellini (SP)
2009-2011 - Suzana Matayoshi (SP)
2011-2013 - Ricardo Morchbacker (RS)
2013-2015 - Guilherme Herzog (RJ)
2015-2017 - Murilo Alves Rodrigues (MG)
2017-2019 - Roberto Murillo Limongi (GO)
2019-2021 - Patrícia Akaishi (SP)

THE FUTURE

Hilton Rocha used to say, “…a way to build is this one of recalling.” A man at
birth begins a relentless countdown. Every day, every month, and every
year that passes is a debt in the general accounts of his existence. The
opposite happens with our Society. Every year is another in its history,
incorporated to its heritage and contributing to consolidating it as a bank of
knowledge and experience to be used for encouraging the growth of future
generations. Such is life, and renewal is its law. Hilton Rocha would say,
“We are all kneeling before a work that does not wither; on the contrary, it
germinates, grows, and blooms.”
From Jaguarão (RS/Brazil), Evaldo Machado dos Santos graduated from the
Federal University of Rio Grande do Sul in 1941. In ophthalmology, he was a
student of Professor Ivo Correia Meyer. He was a specialist in strabismus and
also dedicated himself to oculoplastics and honed his skills under the guidance
of Professor Byron Smith in 1951 in New York (USA). He practiced at the Red
Cross Hospital in RJ. He created the Ophthalmology Service of the Air
Force Hospital of RJ. His lectures dealt chiefly with ptosis and marginal
deformities, areas in which he had great experience.
He was born in Taubaté (SP), on the day of Saint Sebastian, January 20,
1936, a fact that explains the origin of his name. He graduated in medicine at
the University of Medical Sciences, RJ, and did his residency at the Department
of Ophthalmology under the guidance of Professor Werther Duque Estrada. In
oculoplastics, he was a student of John Clark Mustardé at Ballochmyle
Hospital, Scotland, in 1966. He returned to work in the service of Professor
Werther Duque Estrada in RJ. In November 1967, he moved to Campo Grande
(MS/Brazil) to head the Department of Ophthalmology at the Federal University
of Mato Grosso do Sul (UFMS). He excelled in eyelid
reconstruction techniques, which were taught by Jack Mustardé. He was a
very skilled surgeon in reconstructive plastic surgery of the eyelids and
orbital region. The theme earned him courses, lectures, and publications. He
suspended these activities in 2013 for health reasons.
Born in Belém (PA/Brazil) on October 5, 1938, Eduardo Jorge Carneiro
Soares graduated from the Medical School of the Federal University of Pará
in 1962. After graduation, he joined the fifth class of the Specialization
Course in Ophthalmology at the Medical School of the Federal University of Minas
Gerais (under the service of Professor Hilton Rocha) in São Geraldo
Hospital. Upon receiving the title of specialist in ophthalmology in 1965, he
was invited to the honorable mission of joining the faculty of the course as
head of the newly created Sector of Ocular Plastic Surgery. He raised the banner
“Learning and Teaching” that still flies to this day in his professional
routines. In 1971, he attended the Department of Plastic Surgery of Professor
Jack Mustardé in Canniesburn Hospital in Glasgow (Scotland), where he
honed his skills. Upon returning, he created in São Geraldo Hospital the
first year-long course of Fellowship in ocular plastic surgery with exclusive
dedication. From the beginning of his activities, Dr. Soares dedicated special
attention to patients mutilated by anophthalmic cavities. At that time,
Brazilian surgeons did not use implants to replace the ocular volume, thus
condemning patients to suffer physically and emotionally the hardships of
deformities inherent to empty anophthalmic sockets. Several classes, lectures,
and publications followed in an effort to change this situation. This culminated in
the doctoral thesis “The Importance of anatomical and functional Reconstruction
of the anophthalmic cavity in the Prevention and Treatment of the retraction
process of the conjunctival fornices,” approved at UFMG on August 28, 1992. The
victory of this struggle was happily achieved. In Brazil, it is now rare for
patients not to have their orbits reconstructed in the same act of enucleation
or evisceration.
Cardiovascular Risk Biomarkers in Women with and Without Polycystic Ovary Syndrome (PMC11763313)

Polycystic ovary syndrome (PCOS) is the most prevalent metabolic disorder in
women of reproductive age, affecting 5–10% of women. Despite the establishment
of international criteria for diagnosing PCOS, approximately 70% of women with
the syndrome remain undiagnosed. As a metabolic disorder, PCOS is associated
with a higher prevalence of comorbidities such as hypertension, dyslipidemia,
type 2 diabetes, and increased cardiovascular risk, underscoring its clinical
importance. The etiology of PCOS is multifactorial, involving a complex
interplay of genetic, environmental, and lifestyle factors that contribute to
its pathogenesis. The Rotterdam diagnostic criteria require two of the following
three features: clinical or biochemical hyperandrogenism; ovulatory dysfunction
(10 or fewer menstrual periods per year); and polycystic ovarian morphology on
ultrasound. Over 30% of women with PCOS have impaired glucose regulation, and up
to 10% develop diabetes. Mechanistically, PCOS affects the reproductive,
cardiovascular, and metabolic systems. Key factors such as hyperandrogenemia,
chronic inflammation, oxidative stress, and insulin resistance (IR) are present
in PCOS and play critical roles in the dysregulation of several cellular
biomarkers such as heat shock proteins, complement proteins, and coagulation
markers, largely driven by underlying obesity and IR. These mechanisms
contribute to the development of systemic complications and further emphasize
the need for early diagnosis and comprehensive management of PCOS. Obesity is
reported in 50% of women with PCOS and significantly impacts PCOS phenotypes and
fertility outcomes. The interplay between obesity, IR, and PCOS creates a
vicious cycle that complicates metabolic and reproductive health in affected
individuals.
Both obesity and PCOS are linked to various diseases of the cardiovascular
system, such as cardiovascular events, stroke, hypertension, and venous
thromboembolism. PCOS is most strongly associated with IR, which is the major
underlying factor in the development of various cardiometabolic diseases such as
dyslipidemia and hypertension. High circulating levels of C-reactive protein, a
marker of inflammation, as well as increased thickness and calcification of the
coronary arteries, have been associated with IR, obesity, and PCOS as
subclinical diagnostic markers for cardiovascular diseases (CVDs). Although the
underlying pathophysiology of PCOS causes an increase in CVD risk, the
association of PCOS with subclinical markers of CVDs has not been well explored.
Understanding the etiology and systemic effects of obesity and PCOS is crucial
for developing therapeutic strategies to prevent CVDs in women. In a recent
study of nonobese PCOS women, nine cardiovascular risk proteins (CVRPs) were
upregulated compared to women without PCOS. In this study, we compare CVRP
expression in women with and without PCOS, irrespective of BMI or insulin
resistance, to identify PCOS-specific differences. Specifically, we hypothesized
that overweight/obese women with PCOS are at a higher risk of CVDs, potentially
reflected in a comparable or more pronounced CVRP expression profile relative to
their non-PCOS counterparts.
2.1. Study Design

In this exploratory cross-sectional analysis, plasma levels of CVRPs were
measured in Caucasian women with PCOS (n = 147) and non-PCOS women (n = 97)
recruited from the Hull endocrine clinic. Non-PCOS women, who were recruited by
advert, were age matched to the PCOS patients, and all were recruited from the
same geographic region and with lower socioeconomic status. For the diagnosis of
PCOS, the following Rotterdam consensus criteria were used: (1) clinical
(Ferriman–Gallwey score of >8) and biochemical hyperandrogenemia (a free
androgen index (FAI) of >4); (2) oligomenorrhea or amenorrhea; and (3)
polycystic ovaries seen on transvaginal ultrasound. Study participants had no
other condition or illness and were required to be medication-free for nine
months preceding study enrolment, including over-the-counter medication. Testing
was undertaken to ensure that no patient had any of the following endocrine
conditions: non-classical 21-hydroxylase deficiency, hyperprolactinemia,
Cushing's disease, or an androgen-secreting tumor, as per the recommendations.
Demographic data for both non-PCOS and PCOS women are shown in . The study was
conducted in accordance with the Declaration of Helsinki and approved by the
Newcastle and North Tyneside Ethics Committee (reference number 10/H0906/17,
date of approval 6 June 2014). Patients presented after fasting overnight;
height, weight, waist circumference, and body mass index (BMI) were recorded
according to the World Health Organization (WHO) guidelines. BMI was calculated
as weight in kilograms divided by the square of height in meters (kg/m²).
Participants with a BMI ranging from 26 to 29.9 kg/m² were considered overweight
and those with a BMI ≥ 30 kg/m² obese. Blood was withdrawn during fasting, and
plasma was prepared by centrifugation at 3500× g for 15 min, aliquoted, and
stored at −80 °C.
An analysis for sex hormone binding globulin (SHBG), insulin (DPC Immulite 200
analyser, Euro/DPC, Llanberis, UK), and plasma glucose (to calculate the
homeostasis model assessment of insulin resistance (HOMA-IR)) (Synchron LX20
analyser, Beckman-Coulter, High Wycombe, UK) was undertaken. The free androgen
index (FAI) was derived as total testosterone divided by SHBG × 100. Insulin
resistance (IR) was determined by HOMA-IR ((insulin × glucose)/22.5). Serum
testosterone was quantified using isotope-dilution liquid chromatography tandem
mass spectrometry (LC-MS/MS) (Thermo Fisher Scientific, Waltham, MA, USA). Given
that the data collected included a mixed population with varying BMI and IR
levels, we conducted subset analyses using BMI-matched and combined BMI- and
IR-matched data extracted from the complete dataset. Plasma CVRPs were measured
on the slow off-rate modified aptamer (SOMAscan) platform. Calibration was based
on the standards previously described. The slow off-rate modified aptamer
(SOMAmer)-based protein array was utilized for protein quantification, following
the previously outlined procedure. Briefly, the following steps were performed
with EDTA plasma samples: (1) the equilibration of SOMAmers for the binding of
analyte and primer beads involved coupling the biotin moiety to a fully
synthetic fluorophore-labeled SOMAmer through a photocleavable linker; (2)
immobilization of the analyte/SOMAmer complexes was carried out on
streptavidin-substituted support; (3) using long-wave ultraviolet light, the
analyte-SOMAmer complexes were cleaved and released into the solution; (4)
analyte-SOMAmer complexes were immobilized on streptavidin support through
analyte-borne biotinylation; (5) the elution of analyte-SOMAmer complexes was
carried out, utilizing the released SOMAmers as surrogates for analyte
quantification; and (6) quantification was performed through hybridization to
SOMAmer complementary oligonucleotides.
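The derived indices described above reduce to simple arithmetic. As an illustration only (the study's analyses were run in R and SPSS, and the input values below are hypothetical), they can be sketched in Python:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by the square of height (m)."""
    return weight_kg / height_m ** 2

def free_androgen_index(total_testosterone: float, shbg: float) -> float:
    """FAI = total testosterone / SHBG x 100 (both in nmol/L)."""
    return total_testosterone / shbg * 100

def homa_ir(fasting_insulin: float, fasting_glucose: float) -> float:
    """HOMA-IR = (insulin [uU/mL] x glucose [mmol/L]) / 22.5."""
    return fasting_insulin * fasting_glucose / 22.5

# Hypothetical participant: overweight/obese, hyperandrogenemic, insulin resistant
print(round(bmi(82.0, 1.65), 1))                 # 30.1  (obese: >= 30 kg/m2)
print(round(free_androgen_index(2.1, 30.0), 1))  # 7.0   (hyperandrogenemia: FAI > 4)
print(round(homa_ir(10.0, 5.0), 2))              # 2.22  (insulin resistant: >= 1.9)
```

The cutoffs in the comments (BMI ≥ 26/30 kg/m², FAI > 4, HOMA-IR ≥ 1.9) are the ones used in this study.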
Normalization of raw intensities, hybridization, median signal, and calibration
signal were standardized for each sample. The SOMAscan assay data
standardization process involves several key steps to ensure data quality and
comparability. First, hybridization normalization adjusts for well-to-well
variation using hybridization control sequences. Next, intra-plate signal
normalization is applied to calibrator and buffer replicates to correct for
plate-specific biases. The process then includes plate-scale standardization and
calibration against a global calibrator reference to minimize between-plate
variability. Quality control is performed by normalizing QC replicate signals
against a global reference and checking the median QC replicate values against a
global QC standard. Finally, individual sample signals are normalized against a
global signal normalization reference to ensure consistency across all
measurements. The average coefficient of variation (CV) is 6.1%. Version 3.1 of
the SOMAscan assay was used, targeting the 54 CVRPs, which are listed in .

2.2. Data Analysis, Functional Enrichment, and Protein–Protein Interaction Network Analysis

SOMAscan proteomic data were quantile normalized and log-transformed for further
statistical assessment. We used linear models for microarray analysis (limma)
for two-class comparisons to detect the CVRPs that were significantly regulated
in the PCOS cohort. Any CVRP with a fold change of 1 and a raw p-value < 0.05
was considered significant. Supervised learning methods using univariate and
multivariate stepwise logistic regression were performed to model the
association of CVRPs with PCOS in these obese subjects. The significant CVRPs in
the regression analysis were further assessed for their diagnostic accuracy by
computing the Youden Index (YI) and then using the ROC (receiver operating
characteristic) curve method. All tests were two-tailed, and p < 0.05 was
considered significant.
The statistical analysis was performed using R Bioconductor packages (RStudio
2023.06.2; BiocManager v1.30.25) and SPSS v26.0. The differentially expressed
gene (DEG) list in PCOS participants was subjected to gene ontology (GO)
analysis using the Database for Annotation, Visualization, and Integrated
Discovery (DAVID) [ https://david.ncifcrf.gov/ , accessed on 18 August 2024].
Pathway enrichment using the KEGG database was also performed with the DAVID
tool. FDR correction using the Benjamini–Hochberg technique was applied, and an
enriched term with an adjusted p-value < 0.05 was considered significant. As
part of further downstream analysis, the CVRPs dysregulated in PCOS were
submitted to the STRING 12.0 database ( https://string-db.org/ ; accessed on 10
October 2024) to assess the protein–protein interaction network (PPIN).
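The first preprocessing step of the analysis, quantile normalization followed by log transformation, was performed in R as part of the limma workflow; purely to illustrate what quantile normalization does, here is a minimal standalone Python sketch with invented intensity vectors:

```python
import math

def quantile_normalize(samples):
    """Replace each sample's sorted values with the across-sample mean of the
    values holding the same rank, so all samples end up sharing one common
    intensity distribution."""
    n = len(samples[0])
    # index order of each sample's values, lowest to highest
    orders = [sorted(range(n), key=lambda i: s[i]) for s in samples]
    # mean intensity at each rank across samples
    rank_means = [sum(s[o[r]] for s, o in zip(samples, orders)) / len(samples)
                  for r in range(n)]
    out = []
    for o in orders:
        norm = [0.0] * n
        for r, i in enumerate(o):
            norm[i] = rank_means[r]
        out.append(norm)
    return out

raw = [[5.0, 2.0, 3.0],   # sample A (raw RFU, invented)
       [4.0, 1.0, 4.5]]   # sample B
norm = quantile_normalize(raw)
print(norm)  # [[4.75, 1.5, 3.5], [3.5, 1.5, 4.75]]
log_norm = [[math.log2(v) for v in s] for s in norm]  # log transformation
```

After normalization, both samples hold the same set of values, differing only in which protein carries which rank.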
3.1. Clinical Demographics

The baseline demographic data of the whole set of 97 non-PCOS and 147 PCOS
participants are presented in . The PCOS subjects had significantly higher BMIs
( p < 0.001) and elevated anti-Mullerian hormone (AMH) ( p < 0.001),
testosterone ( p < 0.001), and free androgen index (FAI) ( p < 0.001) levels. In
addition, C-reactive protein (CRP) ( p < 0.001), homeostasis model
assessment–insulin resistance (HOMA-IR) ( p < 0.05), and fasting blood glucose
( p < 0.01) were elevated, whilst PCOS women had lower SHBG ( p < 0.001). The
participants in the PCOS and control groups were age matched. Subset analysis:
a BMI-matched subset (overweight/obese, BMI ≥ 26 kg/m²) and a combined
IR-matched (HOMA-IR < 1.9) plus BMI-matched subset were also subjected to
downstream analysis. Subset analysis indicated that the frequency of PCOS among
overweight/obese women with HOMA-IR ≥ 1.9 was significantly higher than among
overweight/obese women with HOMA-IR < 1.9 (72.7% vs. 27.3%, chi-square p-value
= 0.03). A summary of the division of the whole set into subsets is outlined
in .

3.2. Whole Set and Subset Analysis

A. Whole set: CVRPs that differed between PCOS (n = 147) and non-PCOS (n = 97)
women in the entire cohort.

Eleven of the 54 CVRPs were dysregulated in PCOS compared to non-PCOS women:
leptin, interleukin-1 receptor antagonist protein (IL-1Ra), polymeric
immunoglobulin receptor (PIGR), interleukin-18 receptor (IL-18Ra), C-C motif
chemokine 3 (MIP-1a), and angiopoietin-1 (ANGPT1) were upregulated, whilst
advanced glycosylation end product-specific receptor, soluble (sRAGE), bone
morphogenetic protein 6 (BMP6), growth/differentiation factor 2 (GDF2),
superoxide dismutase [Mn] mitochondrial (MnSOD), and SLAM family member 5
(SLAF5) were downregulated relative to the controls ( , A).

B. Subset BMI-matched: CVRPs in BMI-matched (overweight/obese, BMI ≥ 26 kg/m²)
PCOS (n = 114) and non-PCOS (n = 42) women.
Again, 11 of the 54 CVRPs were dysregulated in overweight/obese PCOS women
compared to controls. Six of these CVRPs were in common with the whole set:
ANGPT1 and IL-1Ra were upregulated, and sRAGE, BMP6, GDF2, and MnSOD were
downregulated. In addition, lymphotactin (XCL1) was upregulated, and placenta
growth factor (PIGF), alpha-L-iduronidase (IDUA), angiopoietin-1 receptor,
soluble (sTie-2), and macrophage metalloelastase (MMP12) were downregulated
( , B).

C. Subset normal IR- and BMI-matched: CVRPs in BMI-matched (overweight/obese,
BMI ≥ 26 kg/m²) and normal IR-matched (HOMA-IR < 1.9) PCOS (n = 9) and non-PCOS
(n = 6) women.

Two of the 54 CVRPs, tissue factor (TF) and renin, were upregulated in PCOS in
this subset ( , C).

3.3. Multivariable Regression Analysis

The dysregulated proteins in the whole set were subjected to stepwise
multivariable logistic regression to model their association with PCOS. The
model had a Nagelkerke R-square of 0.31, and the variables included were BMP-6,
IL-1Ra, ANGPT1, sRAGE, and leptin. Higher BMP-6 and sRAGE were noted in the
non-PCOS versus the PCOS group, and hence a negative regression parameter was
associated with PCOS (BMP-6: B = −1.0, p = 0.03 and sRAGE: B = −0.60, p =
0.003). As per the models, higher odds of having PCOS were associated with
higher levels of ANGPT1 (OR 1.79, 95% CI: 0.93–3.43; p = 0.07), IL-1Ra (OR 1.64,
95% CI 1.03–2.62, p = 0.03), and leptin (OR 1.84, 95% CI 1.23–2.77, p = 0.003).
The dysregulated proteins in the BMI-matched (overweight/obese, BMI ≥ 26 kg/m²)
individuals were subjected to multivariable logistic regression to model their
association with PCOS. The model had a Nagelkerke R-square of 0.365 and
indicated that overweight/obese participants with PCOS were more likely to have
higher levels of ANGPT1 (OR 3.85, 95% CI: 1.05–13.35, p < 0.001), programmed
cell death 1 ligand 2 (PD-L2) (OR 2.22, 95% CI: 0.78–8.07, p < 0.004), and
IL-1Ra (OR 0.98, 95% CI 0.31–8.35, p = 0.004).
Negative regression terms were associated with PIGF, sRAGE, and Dickkopf-related protein 1 (DKK1). The IR-matched (HOMA-IR < 1.9) plus BMI-matched (overweight/obese, BMI ≥ 26 kg/m²) data subset did not yield any supervised learning model.

3.4. ROC Curve Analysis

ROC curve analysis was performed with the IR-matched (HOMA-IR < 1.9) plus BMI-matched (overweight/obese, BMI ≥ 26 kg/m²) data subset to identify the CVRPs that could delineate PCOS in this subpopulation. The analysis showed that, among the 54 CVRPs, renin was able to distinguish PCOS in this subset. The area under the curve (AUC) for renin was 0.86 (95% CI 0.65–1.078, p = 0.001). According to the ROC curves and Youden's index, the optimal cutoff value of the renin expression level was 596.3 RFU, with 77.8% sensitivity and 99.9% specificity.

3.5. Protein–Protein Interaction

STRING 12.0 (Search Tool for the Retrieval of Interacting Genes/Proteins) was used to visualize the known and predicted protein–protein interactions for proteins that were upregulated in the following populations: CVRPs in the whole set of participants (PCOS vs. non-PCOS) ( A) and CVRPs in BMI-matched (overweight/obese, BMI ≥ 26 kg/m²) PCOS vs. non-PCOS ( B) (https://string-db.org/; accessed on 10 October 2024) groups. The figures represent interactions between the upregulated CVRPs and their immediate interacting partners.

3.6. Functional Enrichment Analysis for Dysregulated Proteins

A comprehensive analysis identified eleven dysregulated cardiovascular risk proteins (CVRPs) in the whole set of participants (PCOS vs. non-PCOS) ( A) and eleven dysregulated CVRPs in the BMI-matched subset (overweight/obese, BMI ≥ 26 kg/m²) PCOS vs. non-PCOS ( B) groups.
Further investigation through functional enrichment and gene ontology (GO) analysis using the DAVID tool highlighted enrichment of GO terms linked to the regulation of cytokines and inflammatory responses, critical pathways known to be actively dysregulated in PCOS and cardiovascular disease. These findings suggest a potential pivotal role for these proteins and pathways in the shared pathogenesis of PCOS and cardiovascular disease.
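The ROC construction behind the renin result above (a rank-based AUC, then a Youden's-index-optimal cutoff) is straightforward to reproduce for any single biomarker column. A minimal pure-Python sketch on invented renin-like values (the numbers are illustrative only, not the study data), assuming higher values mark PCOS:

```python
def roc_summary(values, labels):
    """Rank-based AUC (Mann-Whitney statistic) plus the Youden's-J-optimal
    cutoff for one biomarker. labels: 1 = PCOS, 0 = non-PCOS; higher values
    are assumed to indicate the positive class."""
    pos = [v for v, y in zip(values, labels) if y == 1]
    neg = [v for v, y in zip(values, labels) if y == 0]
    # AUC = P(randomly chosen positive > randomly chosen negative); ties count 0.5
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    auc = wins / (len(pos) * len(neg))
    best = (-1.0, None, None, None)  # (J, cutoff, sensitivity, specificity)
    for t in sorted(set(values)):    # classify as positive when value >= t
        sens = sum(p >= t for p in pos) / len(pos)
        spec = sum(n < t for n in neg) / len(neg)
        j = sens + spec - 1          # Youden's index
        if j > best[0]:
            best = (j, t, sens, spec)
    return auc, best

# Hypothetical renin-like values (RFU) for 9 PCOS and 6 non-PCOS women,
# mirroring the subset sizes above (the values themselves are invented).
pcos = [640, 610, 700, 598, 655, 480, 720, 630, 597]
ctrl = [420, 510, 450, 595, 390, 430]
auc, (j, cutoff, sens, spec) = roc_summary(pcos + ctrl, [1] * 9 + [0] * 6)
```

The same two quantities (AUC and the Youden-optimal cutoff with its sensitivity/specificity) are what the renin analysis reports for the real data.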
4. Discussion

This research provides insights into the dysregulation of CVRPs in women with PCOS, especially in those who are overweight/obese. The dysregulated proteins reported here emphasize the intricate interplay between metabolic and cardiovascular pathways in PCOS, implicating these proteins as contributors to the increased risk of CVD in these women.

4.1. Dysregulation in CVRPs in PCOS Women (Whole Set)

In this exploratory investigation, 11 CVRPs were dysregulated in the entire PCOS group where IR and BMI were unmatched: leptin, IL-1Ra, PIGR, IL-18Ra, MIP-1a, and ANGPT1 were upregulated, whereas sRAGE, BMP6, GDF2, MnSOD, and SLAF5 were downregulated. Leptin, an adipocyte-derived pro-inflammatory adipokine, contributes to the low-grade inflammatory state in overweight/obese individuals and is implicated in CVD events, with hyperleptinemia linked to coronary heart disease and heart failure. Beyond its cardiovascular effects, leptin plays a crucial role in reproductive processes, highlighting its diverse impact on multiple physiological functions in the body. Elevated serum leptin levels have been reported in overweight/obese women with PCOS and are linked to hyperandrogenemia and IR, key features of the syndrome. Leptin exerts significant peripheral effects that may contribute to the development of cardiometabolic disorders by promoting vascular inflammation, increasing oxidative stress, and inducing hypertrophy of vascular smooth muscle cells. Interleukin 1 receptor antagonist (IL-1Ra) is a critical mediator of inflammatory processes that binds to the IL-1 receptor, blocking IL-1 alpha and beta without inducing signaling; IL-1Ra was found to be upregulated in our study, and is also elevated in nonobese PCOS patients. Studies have shown that IL-1Ra gene polymorphisms, particularly allele II in intron 2, are strongly associated with metabolic features of PCOS, and elevated IL-1Ra levels may predict impaired glucose metabolism regardless of BMI.
IL-1Ra plays a significant role in the pathophysiology of PCOS and may contribute to CVD risk in these patients. Elevated levels of IL-1Ra in women with PCOS correlate with IR, obesity, and impaired glucose metabolism. Polymeric immunoglobulin receptor (PIGR) was upregulated in PCOS. PIGR is expressed in the intestine, bronchus, salivary glands, renal tubule, and uterus. PIGR is essential in mucosal immunity for transporting dimeric IgA (dIgA) across epithelial cells. However, its role in PCOS is unexplored. IL-18Ra, the receptor for the pro-inflammatory cytokine IL-18, was elevated in women with PCOS. Elevated levels of IL-18 and its receptor have been reported in women with PCOS, correlating with IR, obesity, and hyperandrogenism, and they are implicated in the inflammatory processes that contribute to metabolic syndrome, a condition associated with an increased risk of cardiovascular events. An increase in MIP-1a was also observed in PCOS, in accordance with prior reports. Elevated levels of MIP-1a in PCOS activate the phosphatidylinositol 3-kinase/protein kinase B (PI3K/AKT) and mitogen-activated protein kinase (MAPK) signaling pathways, leading to the increased production of pro-inflammatory cytokines and enhanced inflammatory responses, potentially contributing to cardiovascular risk. ANGPT1 plays a significant role in the pathophysiology of PCOS and its associated cardiovascular risk. Additionally, treatment with ANGPT1 reduces the risk of diet-induced obesity. Our study found an increase in ANGPT1 expression in the overweight/obese PCOS cohort versus their non-PCOS counterparts, which agrees with other reports of elevated levels in PCOS, suggesting a compensatory mechanism in response to the heightened vascular permeability driven by other factors such as vascular endothelial growth factor (VEGF). sRAGE was decreased in women with PCOS in this study. sRAGE has an inverse relationship with AGEs and may serve as a protective factor against cardiovascular complications in PCOS.
Decreased sRAGE levels in women with PCOS may exacerbate the harmful effects of AGEs, potentially contributing to long-term metabolic and cardiovascular risks mediated through chronic inflammation and IR. By binding to AGEs and thus mitigating their harmful effects on vascular health, sRAGE may protect against the cardiovascular complications associated with PCOS. Bone morphogenetic protein 6 (BMP6) was found to be decreased in the PCOS subjects here, although in a study of circulating BMP6 levels using a less sensitive detection method, BMP6 was not detectable. BMP6 is involved in regulating ovarian function, particularly in follicle development and oocyte maturation, by modulating intercellular communication within the ovary. Dysregulation of BMP6 signaling has been linked to the pathogenesis of PCOS, contributing to ovulatory dysfunction and associated metabolic disturbances, which can elevate cardiovascular risk in affected women. Growth/differentiation factor 2 (GDF2), also known as bone morphogenetic protein 9 (BMP9), was found to be reduced in the PCOS versus control groups. GDF2 is involved in the regulation and control of ovarian folliculogenesis. Circulating BMP9 levels have been found to correlate negatively with cardiovascular risk factors, such as hypertension and coronary heart disease. Lower levels of BMP9 are associated with an increased risk of these conditions, suggesting that BMP9 could serve as a potential biomarker for CVD progression, including in individuals with metabolic disorders such as PCOS. Superoxide dismutase [Mn] (MnSOD) was found to be decreased in women with PCOS. Studies on serum SOD activity in PCOS patients have reported conflicting results, with some suggesting elevated SOD levels in PCOS, whilst others suggest the opposite.
MnSOD plays a protective role by reducing superoxide levels in vascular tissues, protecting against CVD, as oxidative stress is a known contributor to cardiovascular pathology. In patients with PCOS, the risk of CVD is increased due to the associated IR and metabolic syndrome, and a reduction in MnSOD activity may be detrimental. SLAF5, also known as CD84, was downregulated in women with PCOS in our study. SLAF5, a homophilic cell surface glycoprotein, is primarily expressed at peak levels on macrophages, dendritic cells, and platelets and, to a lesser extent, on immune cells such as B lymphocytes. There is a paucity of information about the role of SLAF5 in PCOS. CD84 has been shown to be highly expressed in patients with Kawasaki disease (KD) with coronary arteritis. CD84 likely plays an important role in the pathogenesis of chronic inflammation, but it is unclear whether its role is protective or deleterious.

4.2. Dysregulation of CVRPs in BMI-Matched PCOS Subset

When only BMI-matched (overweight/obese, BMI ≥ 26 kg/m²) participants were considered, again 11 of the 54 CVRPs were dysregulated in overweight/obese PCOS individuals compared to their non-PCOS counterparts. Among these, ANGPT1, IL-1Ra, and XCL1 were upregulated, whereas BMP6, PIGF, MnSOD, IDUA, GDF2, sTie-2, sRAGE, and MMP12 were downregulated. XCL1, also known as lymphotactin, was upregulated in PCOS individuals in this subset; it is a C-class chemokine produced by T cells and natural killer cells in response to inflammatory and infectious stimuli, and it predominantly exerts its effects by binding to and activating the XCR1 receptor. There are no reports to date about this protein in the context of PCOS. ANGPT1 and IL-1Ra were again upregulated in the BMI-matched PCOS cohort, and their roles in the pathophysiology of PCOS are noted above. Decreased levels of PIGF were found in BMI-matched PCOS women. Chen et al.
investigated placental growth factor (PIGF), a protein that stimulates the growth and survival of endothelial cells under ischemic conditions, and showed that a high ratio of circulating PIGF to the cell stress marker TRAIL receptor-2 indicates a lower cardiovascular risk, suggesting a plausible protective action of PIGF in CVDs. IDUA was downregulated in the BMI-matched PCOS cohort. IDUA (α-L-iduronidase) is involved in the breakdown of glycosaminoglycans (GAGs), and its deficiency may lead to an accumulation of GAGs, thereby negatively impacting cardiovascular health. There is no direct evidence of IDUA expression in relation to obesity and PCOS. Our study reports the downregulation of serum sTie-2 in overweight/obese BMI-matched PCOS women. The soluble form of the Tie2 receptor (sTie-2) binds to angiopoietins and is essential for vascular stability and remodeling. Its direct role in obesity and PCOS has not previously been reported, though Scotti et al. reported no difference in sTie-2 in follicular fluid from PCOS versus control individuals. MMP12 was downregulated in overweight/obese BMI-matched PCOS women. MMP12 degrades elastin and promotes macrophage recruitment, increasing the risk for CVDs. MMP12 expression is associated with metabolic dysfunction and, in contrast to our findings here, has been reported to be elevated in obesity, where it contributes to alterations in the extracellular matrix (ECM). PCOS women are at an increased risk of developing preeclampsia, a condition that shares common cardiovascular markers, such as MMP12. This was demonstrated for the angiogenic marker CD93, which has a pathogenic role both in the context of obesity and cardiovascular disease, as well as in preeclampsia, emphasizing the relevance of these markers in broader pathological contexts. sRAGE, BMP6, GDF2, and MnSOD were downregulated in overweight/obese BMI-matched PCOS women.
These CVRPs play a protective role against CVDs, and their downregulation in PCOS women is indicative of increased cardiovascular risk.

4.3. Dysregulation of CVRPs in IR-Matched Plus BMI-Matched PCOS Subset

When both BMI (overweight/obese, BMI ≥ 26 kg/m²) and IR (HOMA-IR < 1.9) were accounted for, only two proteins, tissue factor (TF) and renin, were dysregulated (upregulated), indicating that both TF and renin were independent of obesity and insulin resistance and suggesting that these may be CVRPs inherent to PCOS. TF, a transmembrane glycoprotein, serves as the primary initiator of blood coagulation and is induced on monocytes and endothelial cells by inflammatory stimuli such as endotoxin, tumor necrosis factor, and IL-1β. Elevated TF levels are associated with increased cardiovascular risk, acute coronary syndrome, and PCOS. In accordance with our results, the increased expression of TF in PCOS is independent of obesity. In the BMI- and IR-matched subjects, renin was upregulated in PCOS women. Renin catalyzes the first, rate-limiting step of the renin–angiotensin–aldosterone system (RAAS) and is also a biomarker for CVD. Renin plays a significant role in the metabolic abnormalities observed in PCOS, particularly in relation to IR, as women with PCOS exhibit higher renin levels that positively correlate with insulin concentrations and HOMA-IR. This suggests a complex interplay between the RAAS and insulin signaling pathways. Functional enrichment analysis of the dysregulated proteins revealed significant pathways linked to cytokine production regulation, endothelial cell proliferation, and inflammatory responses. This agrees with the chronic inflammation and vascular dysfunction inherent to PCOS, whilst providing a link between PCOS and CVDs. Six dysregulated proteins were common between the whole PCOS cohort and the BMI-matched PCOS cohort, of which ANGPT1 and IL-1Ra were upregulated, whereas sRAGE, BMP6, GDF2, and MnSOD were downregulated.
Thus, CVRPs may serve as potential biomarkers for cardiovascular risk in overweight/obese women with PCOS. Of particular interest is the role of ANGPT1 and leptin, which are associated with inflammation and vascular function. Four CVRPs have been positively associated with obesity irrespective of age (leptin, IL-1Ra, IL-18Ra, and MIP-1a), and all four were upregulated in PCOS. Conversely, sRAGE, MnSOD, BMP6, and GDF2 were downregulated in the overweight/obese PCOS cohort, and all of these have been reported to show reduced expression in obesity with metabolic syndrome; it is therefore not surprising that the levels of these proteins were reduced in the overweight/obese subset of women, both with and without PCOS, though it appears that PCOS caused further downregulation. A multivariable logistic regression model was used here to investigate the link between specific CVRPs and PCOS. The analysis indicates that elevated levels of ANGPT1, IL-1Ra, and leptin are associated with higher odds of PCOS. These proteins have crucial roles in pathways related to inflammation and metabolic dysfunction in PCOS. Conversely, a negative regression parameter was associated with BMP6 and sRAGE in PCOS, indicating a compromised regulatory mechanism in PCOS. Renin was found to distinguish PCOS in the BMI- and IR-matched women, as seen in the ROC curve analysis, suggesting its value as a biomarker in this particular subset.
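The odds ratios quoted for the regression models follow directly from the fitted logistic coefficients: OR = exp(B), with a 95% Wald confidence interval of exp(B ± 1.96·SE). A small sketch, using the reported leptin OR to back-solve an illustrative standard error (the SE value is an assumption for demonstration, not a number taken from the paper):

```python
import math

def odds_ratio_ci(beta, se, z=1.959964):
    """Odds ratio and 95% Wald confidence interval from a logistic-regression
    coefficient (beta) and its standard error (se)."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Leptin example: the reported OR 1.84 (95% CI 1.23-2.77) implies
# beta = ln(1.84) ~ 0.610; an SE consistent with that CI can be back-solved
# from the interval width: se = (ln(2.77) - ln(1.23)) / (2 * 1.96) ~ 0.207.
or_leptin, ci_lo, ci_hi = odds_ratio_ci(0.610, 0.207)
```

The same transformation links the negative coefficients (e.g. BMP6, B = −1.0) to odds ratios below 1, i.e. higher levels associated with lower odds of PCOS.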
STRING analysis of the 11 dysregulated proteins from the whole PCOS cohort and the BMI-matched PCOS cohort indicates that, although these proteins have limited direct interactions, they are well connected through their immediate binding partners: for example, interleukin 10 (IL10), C-C motif chemokine ligand 3 (CCL3), and interleukin 1 alpha (IL1A), which are active in cytokine regulation, cytokine–cytokine interaction, and inflammatory responses, and angiopoietin 2 (ANGPT2), angiopoietin 4 (ANGPT4), and angiogenin (ANG), which have specific roles in vascular function. Thus, the dysregulated proteins identified here do not act in isolation but rather as part of a broader network influencing metabolic and cardiovascular health. The co-expression and interaction patterns suggest that targeting these protein pathways could be a viable strategy for mitigating the cardiovascular risk associated with PCOS. The results in this study differed from the CVRPs reported in a nonobese PCOS study, likely due to the influence of the increased weight associated with the CVRPs reported here, which would not have been a factor in that study. In addition, the women in this study were all PCOS phenotype A, and it is unclear what the phenotype was in the nonobese study, though those patients tend to be phenotype B or C, and phenotype C is less frequently associated with an increased cardiovascular risk. In view of the potential confounding effects of over-the-counter medication (such as anti-inflammatory agents and herbal preparations), these were specifically excluded in the population studied to ensure that the protein changes reported were not pharmacologically exaggerated or suppressed. The limitations of this study include its small sample size and the fact that it was conducted solely on a Caucasian population, which may restrict the generalizability of the findings. To confirm these results, similar studies should be conducted in diverse ethnic groups.
Additionally, further molecular-level analyses are necessary to establish the potential role of predictive protein candidates, such as IL-1Ra and leptin, as clinical indicators of PCOS in overweight individuals. Further studies of CV risk in PCOS should also account for the PCOS phenotype.
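For reference, the two matching indices used throughout (HOMA-IR for insulin resistance, FAI for androgen status) are simple derived quantities. A minimal sketch of the standard formulas, with an illustrative check against the HOMA-IR < 1.9 matching threshold (the numeric inputs are invented, not patient data):

```python
def homa_ir(glucose_mmol_l, insulin_uU_ml):
    """HOMA-IR = fasting glucose (mmol/L) x fasting insulin (uU/mL) / 22.5
    (the standard Matthews et al. approximation)."""
    return glucose_mmol_l * insulin_uU_ml / 22.5

def free_androgen_index(testosterone_nmol_l, shbg_nmol_l):
    """FAI = 100 x total testosterone / SHBG, both in nmol/L."""
    return 100.0 * testosterone_nmol_l / shbg_nmol_l

# Illustrative values: glucose 5.0 mmol/L with insulin 8 uU/mL gives
# HOMA-IR ~ 1.78, i.e. below the 1.9 cutoff used for the IR-matched subset.
h = homa_ir(5.0, 8.0)
fai = free_androgen_index(2.0, 40.0)  # hypothetical testosterone and SHBG
```

Since women with PCOS in this cohort had lower SHBG and higher testosterone, both terms of the FAI ratio move in the direction of a higher index, consistent with the elevated FAI reported in the clinical demographics.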
In this exploratory investigation, 11 CVRPs were dysregulated in the entire PCOS group where IR and BMI were unmatched: leptin, IL-1Ra, PIGR, IL-18Ra, MIP-1a, and ANGPT1 were upregulated whereas sRAGE, BMP6, GDF2, Mn-SOD, and SLAF 5 were downregulated. Leptin, an adipocyte-derived pro-inflammatory adipokine, contributes to the low-grade inflammatory state in overweight/obese individuals and is implicated in CVD events, with hyperleptinemia linked to coronary heart disease and heart failure . Beyond its cardiovascular effects, leptin plays a crucial role in reproductive processes , highlighting its diverse impact on multiple physiological functions in the body. Elevated serum leptin levels were reported in overweight/obese women with PCOS, and are linked to hyperandrogenemia and IR, key features of the syndrome . Leptin exerts significant peripheral effects that may contribute to the development of cardiometabolic disorders by promoting vascular inflammation, increasing oxidative stress, and inducing hypertrophy of vascular smooth muscle cells . Interleukin 1 receptor antagonist (IL-1Ra) is a critical mediator of inflammatory processes that binds to the IL-1 receptor, blocking IL-1 alpha and beta without inducing signaling; IL-1Ra was found to be upregulated in our study, and is also elevated in nonobese PCOS patients . Studies have shown that IL-1Ra gene polymorphisms, particularly allele II in intron 2, are strongly associated with metabolic features of PCOS , and elevated IL-1Ra levels may predict impaired glucose metabolism regardless of BMI . IL-1Ra plays a significant role in the pathophysiology of PCOS and may contribute to CVD risk in these patients . Elevated levels of IL-1Ra in women with PCOS correlate with IR, obesity, and impaired glucose metabolism . Polymeric immunoglobulin receptor (PIGR) was upregulated in PCOS. PIGR is expressed in the intestine, bronchus, salivary glands, renal tubule, and uterus . 
PIGR is essential in mucosal immunity for transporting dimeric IgA (dIgA) across epithelial cells. However, its role in PCOS is unexplored. IL-18Ra, a pro-inflammatory cytokine, was elevated in women with PCOS. Elevated levels of IL-18 and its receptor were reported in women with PCOS, correlating with IR, obesity, and hyperandrogenism and it is implicated in the inflammatory processes that contribute to metabolic syndrome, a condition associated with an increased risk of cardiovascular events . An increase in MIP-1a was also observed in PCOS, in accordance with prior reports . Elevated levels of MIP-1a in PCOS activate the phosphatidylinositol 3-kinase/protein kinase B (PI3K/AKT) and mitogen-activated protein kinase (MAPK) signaling pathways, leading to the increased production of pro-inflammatory cytokines and enhanced inflammatory responses, potentially contributing to cardiovascular risk . ANGPT1 plays a significant role in the pathophysiology of PCOS and its associated cardiovascular risk . Additionally, treatment with ANGPT1 reduces the risk of diet-induced obesity . Our study found an increase in ANGPT1 expression in the PCOS overweight/obese cohort versus their non-PCOS counterparts, that agrees with other reports of elevated levels in PCOS, suggesting a compensatory mechanism in response to the heightened vascular permeability driven by other factors like vascular endothelial growth factor (VEGF) . sRAGE was decreased in women with PCOS in this study. sRAGE has an inverse relationship with AGEs and may serve as a protective factor against cardiovascular complications in PCOS . Decreased sRAGE levels in women with PCOS may exacerbate the harmful effects of AGEs, potentially contributing to long-term metabolic and cardiovascular risks mediated through chronic inflammation and IR . 
sRAGE may serve as a protective factor against the cardiovascular complications associated with PCOS by binding to AGEs and thus mitigating their harmful effects on vascular health . Bone morphogenetic protein 6 (BMP6) was found to be decreased here in the PCOS subjects though, in a study of circulating BMP6 levels using a less sensitive detection method, BMP6 was not found to be detectable . BMP6 is involved in regulating ovarian function, particularly in follicle development and oocyte maturation, by modulating intercellular communication within the ovary. Dysregulation of BMP6 signaling was linked to the pathogenesis of PCOS, contributing to ovulatory dysfunction and associated metabolic disturbances, which can elevate cardiovascular risk in affected women. Growth/differentiation factor 2 (GDF2), also known as bone morphogenetic protein 9 (BMP9), was found to be reduced in the PCOS versus control groups. GDF2 is involved in the regulation and control of ovarian folliculogenesis . Circulating BMP9 levels were found to correlate negatively with cardiovascular risk factors, such as hypertension and coronary heart disease . Lower levels of BMP9 are associated with an increased risk of these conditions, suggesting that BMP9 could serve as a potential biomarker for CVD progression in individuals, including those with metabolic disorders such as PCOS. Superoxide dismutase [Mn] (MnSOD) was found to be decreased in women with PCOS. Studies on serum SOD activity in PCOS patients have reported conflicting results , with some studies suggesting elevated SOD levels in PCOS , whilst others suggest the opposite . MnSOD plays a protective role by reducing superoxide levels in vascular tissues , protecting against CVD, as oxidative stress is a known contributor to cardiovascular pathology. In patients with PCOS, the risk of CVD is increased due to the associated IR and metabolic syndrome, and a reduction in MnSOD activity may be detrimental . 
SLAF5, also known as CD84, was downregulated in women with PCOS in our study. SLAF5, a homophilic cell surface glycoprotein, is primarily expressed at peak levels on macrophages, dendritic cells, platelets and, to a lesser extent, on immune cells such as B lymphocytes. There is a paucity of information about the role of SLAF5 in PCOS. CD84 is shown to be highly expressed in patients with chronic kidney disease (KD) with coronary arteritis . CD84 likely plays an important role in the pathogenesis of chronic inflammation, but it is unclear whether it plays a protective or a deleterious role.
When only BMI-matched (overweight/obese, BMI ≥ 26 kg/m 2 ) candidates were considered, again 11 of the 54 CVRPs were dysregulated in obese/overweight PCOS individuals compared to their non-PCOS counterparts. Among these, ANGPT1, IL-1Ra, and XCL1 were upregulated whereas BMP6, PIGF, Mn-SOD, IDUA, GDF2, sTie-2, sRAGE, and MMP12 were downregulated. XCL1, also known as lymphotactin, was upregulated in PCOS individuals in this subset, and is a C-class chemokine produced by T cells and natural killer cells in response to inflammatory and infectious stimuli. It predominantly exerts its effects by binding to and activating the XCR1 receptor . There are no reports to date about this protein in the context of PCOS. ANGPT1 and IL-1Ra were again upregulated in the BMI-matched PCOS cohort and their roles in the pathophysiology of PCOS are noted above. Decreased levels of PIGF were found in BMI-matched PCOS women. Chen et al. investigated placental growth factor (PIGF), a protein that stimulates the growth and survival of endothelial cells under ischemic conditions, and showed that a high ratio of circulating PIGF to the cell stress marker TRAIL receptor-2 indicates a lower cardiovascular risk, indicative of the plausible protective action of PIGF in CVDs . IDUA was downregulated in the BMI-matched PCOS cohort. IDUA (α-L-iduronidase) is involved in the breakdown of glycosaminoglycans (GAGs) and its deficiency may lead to an accumulation of GAGs thereby negatively impacting cardiovascular health . There is no direct evidence of IDUA expression in relation to obesity and PCOS. Our study reports the downregulation of serum s-Tie2 in overweight/obese BMI-matched PCOS women. The soluble form of the Tie2 receptor (s-Tie2) binds to angiopoietins and is essential for vascular stability and remodeling. Its direct role in obesity and PCOS have not previously been reported though Scotti et al. reported no difference in sTie2 from follicular fluid in PCOS versus control individuals . 
MMP12 was downregulated in overweight/obese BMI-matched PCOS women. MMP12 degrades elastin and promotes macrophage recruitment, increasing the risk for CVDs . MMP12 expression is associated with metabolic dysfunction and, in contrast to our findings here, was reported to be elevated in obesity, which contributes to alterations in the extracellular matrix (ECM) . PCOS women are at an increased risk of developing preeclampsia, a condition that shares common cardiovascular markers, such as MMP12. This was demonstrated for the angiogenic marker CD93, which has a pathogenic role both in the context of obesity and cardiovascular disease , as well as in preeclampsia, emphasizing the relevance of these markers in broader pathological contexts. sRAGE, BMP6, GDF2, and Mn-SOD were downregulated in overweight/obese BMI-matched PCOS women. These CVRPs play a protective role against CVDs and their downregulation in PCOS women is indicative of increased cardiovascular risk.
When both BMI (overweight/obese, BMI ≥ 26 kg/m²) and IR (HOMA-IR < 1.9) were accounted for, only two proteins, tissue factor (TF) and renin, were dysregulated (upregulated), indicating that both TF and renin were independent of both obesity and insulin resistance and suggesting that these may be CVRPs inherent to PCOS. TF, a transmembrane glycoprotein, serves as the primary initiator of blood coagulation and is induced on monocytes and endothelial cells by inflammatory stimuli such as endotoxin, tumor necrosis factor, and IL-1β. Elevated TF levels are associated with increased cardiovascular risk, acute coronary syndrome, and PCOS. In accordance with our results, the increased expression of TF in PCOS is independent of obesity. In the BMI- and IR-matched subjects, renin was upregulated in PCOS women. Renin catalyzes the first, rate-limiting step in the renin-angiotensin-aldosterone system (RAAS) and is also a biomarker for CVD. Renin plays a significant role in the metabolic abnormalities observed in polycystic ovary syndrome (PCOS), particularly in relation to IR, as women with PCOS exhibit higher renin levels that positively correlate with insulin concentrations and HOMA-IR. This suggests a complex interplay between the RAAS and insulin signaling pathways. Functional enrichment analysis of the dysregulated proteins revealed significant pathways linked to cytokine production regulation, endothelial cell proliferation, and inflammatory responses. This agrees with the chronic inflammation and vascular dysfunction inherent to PCOS while providing a link between PCOS and CVDs. Six dysregulated proteins were common between the whole PCOS cohort and the BMI-matched PCOS cohort, of which ANGPT1 and IL-1Ra were upregulated whereas sRAGE, BMP6, GDF2, and Mn-SOD were downregulated. Thus, CVRPs may serve as potential biomarkers for cardiovascular risk in overweight/obese women with PCOS.
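HOMA-IR, used here as the insulin-resistance matching criterion, is computed from fasting measurements. As a quick illustration (the example glucose/insulin values are invented, not from this study), the standard formula is:

```python
def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uu_ml: float) -> float:
    """Homeostatic Model Assessment of Insulin Resistance:
    (fasting glucose [mmol/L] x fasting insulin [uU/mL]) / 22.5."""
    return fasting_glucose_mmol_l * fasting_insulin_uu_ml / 22.5

# e.g., glucose 5.0 mmol/L and insulin 8 uU/mL
print(round(homa_ir(5.0, 8.0), 2))  # 1.78, below the 1.9 cutoff used here
```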
Of particular interest is the role of ANGPT1 and leptin, which are associated with inflammation and vascular function. Four CVRPs (leptin, IL-1Ra, IL-18Ra, and MIP-1a) were positively associated with obesity irrespective of age, and all four were upregulated in PCOS. Conversely, sRAGE, Mn-SOD, BMP6, and GDF2 were downregulated in the overweight/obese PCOS cohort, and all of these were reported to have reduced expression in obesity with metabolic syndrome; it is therefore not surprising that the levels of these proteins were reduced in the overweight/obese subset of women, both with and without PCOS, though it appears that PCOS caused further downregulation. A multivariate regression analysis model was used here to investigate the link between specific CVRPs and PCOS. The analysis indicates that elevated levels of ANGPT1, IL-1Ra, and leptin are associated with a higher risk for PCOS. These proteins have crucial roles in pathways related to inflammation and metabolic dysfunction in PCOS. Conversely, a negative regression parameter was associated with BMP6 and sRAGE in PCOS, indicating a compromised regulatory mechanism in PCOS. Renin was found to distinguish PCOS in the BMI- and IR-matched women, as seen in the ROC curve analysis, suggesting its value as a biomarker in this particular subset. STRING analysis of the 11 dysregulated proteins from the whole PCOS cohort and the BMI-matched PCOS cohort indicates that, although these proteins have limited direct interactions, they are well connected through their immediate binding partners, such as interleukin 10 (IL10), C-C motif chemokine ligand 3 (CCL3), and interleukin 1 alpha (IL1A), which are reported as active members in cytokine regulation, cytokine–cytokine interaction, and inflammatory responses, and, for example, angiopoietin 2 (ANGPT2), angiopoietin 4 (ANGPT4), and angiogenin (ANG), which have specific roles in vascular function.
Thus, the dysregulated proteins identified here do not act in isolation but rather as part of a broader network influencing metabolic and cardiovascular health. The co-expression and interaction patterns suggest that targeting these protein pathways could be a viable strategy for mitigating cardiovascular risk associated with PCOS. The results in this study differed from the CVRPs that were reported in a nonobese PCOS study, likely due to the influence of the increased weight that was associated with the CVRPs reported here and that would not have been a factor in that study. In addition, the women in this study were all PCOS phenotype A, and it is unclear what the phenotype was in the nonobese study, though these patients tend to be phenotype B or C, and C is less frequently associated with an increased cardiovascular risk. In view of the potential confounding effects of over-the-counter medication (such as anti-inflammatory agents and herbal preparations), these were specifically excluded from the population studied to ensure that the protein changes reported were not pharmacologically exaggerated or suppressed. The limitations of this study include its small sample size and the fact that it was conducted solely on a Caucasian population, which may restrict the generalizability of the findings. To confirm these results, similar studies should be conducted in diverse ethnic groups. Additionally, further molecular-level analyses are necessary to establish the potential role of predictive protein candidates, such as IL-1Ra and leptin, as clinical indicators of PCOS in overweight individuals. Further studies on cardiovascular risk in PCOS should also account for the PCOS phenotype.
In conclusion, a combination of upregulated obesity-related CVRPs (ANGPT1, IL-1Ra, and XCL1) and downregulated cardioprotective proteins (sRAGE, BMP6, Mn-SOD, and GDF2) in PCOS may contribute to the increased risk of CVDs in overweight women with PCOS. The observed upregulation of TF and renin in the BMI- and IR-matched PCOS subgroup, despite the limited sample size, suggests a potential association with cardiovascular risks in these patients, warranting further investigation in larger cohorts.
Defective control of pre–messenger RNA splicing in human disease
Intron removal is performed by the spliceosome, whose assembly starts with the recognition of the 5′ splice site (5′ss), the 3′ splice site (3′ss), and the branch site by U1 small nuclear RNP (snRNP), U2AF, and U2 snRNP, respectively. Along with the U4/U6.U5 tri-snRNP, >100 proteins are recruited to reconfigure the interactions between small nuclear RNAs, between small nuclear RNAs and the pre-mRNA, and to position nucleotides for two successive nucleophilic attacks that produce the ligated exons and the excised intron. Fewer than 1,000 introns (i.e., ∼0.3%) are removed by the minor spliceosome, which uses distinct snRNPs (U11, U12, U4atac, and U6atac) but shares U5 and most proteins with the major spliceosome. Definition of intron borders often requires the collaboration of RNA-binding proteins (RBPs), such as serine-arginine (SR) proteins and heterogeneous nuclear RNPs (hnRNPs), which interact with specific exonic or intronic sequence elements usually located in the vicinity of splice sites. As the combinatorial arrangement of these interactions helps or antagonizes the early steps of spliceosome assembly, one ambitious goal is to determine how cell-, tissue-, and disease-specific variations in the expression of these splicing regulators and their association near splice sites induce specific changes in alternative splicing. This challenge is compounded by the fact that only a fraction of the >1,000 RBPs has been studied and that all RBPs have splice variants, usually of undetermined function. Moreover, the function of RBPs is often modulated by posttranslational modifications that occur in response to environmental insults and metabolic cues. An extra layer of complexity to our view of splicing control is added when we consider that experimentally induced decreases in the levels of core spliceosomal components also affect splice site selection.
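The splice-site logic described above can be caricatured in a few lines of code. This is purely an illustrative sketch, not a model of real spliceosome assembly (which also requires the branch site, the polypyrimidine tract, and many trans-acting factors): it excises a single intron delimited by the canonical GT…AG dinucleotides and ligates the flanking exons.

```python
def splice(pre_mrna: str) -> str:
    """Toy intron removal: cut at the first GT (5'ss) and the last
    AG (3'ss), then ligate the flanking exons. Real splice-site
    choice also depends on the branch site and regulatory RBPs."""
    five_ss = pre_mrna.find("GT")
    three_ss = pre_mrna.rfind("AG")
    if five_ss == -1 or three_ss <= five_ss:
        return pre_mrna  # no recognizable intron; leave unspliced
    return pre_mrna[:five_ss] + pre_mrna[three_ss + 2:]

# exon1 + intron (GT...AG) + exon2 -> ligated exons
print(splice("ATGGCC" + "GTAAGTCTAACTTTCAG" + "GGATAA"))  # ATGGCCGGATAA
```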
Indeed, reducing the level of dozens of spliceosomal components, including SF3B1, U2AF, and tri-snRNP components, affects the production of splice variants involved in apoptosis and cell proliferation. Although it remains unclear whether variation in the levels and activity of generic factors is used to control splicing decisions under normal conditions, deficiencies in tri-snRNP proteins or in proteins involved in snRNP biogenesis are now frequently associated with aberrant splicing in disease (e.g., PRPF proteins in retinitis pigmentosa, the SMN protein in spinal muscular atrophy [SMA], and SF3B1, SRSF2, and U2AF1 in MDS [see Spliceosomal proteins in MDS section]). How mutations in generic splicing factors confer gene- and cell type–specific effects is an intriguing question. The suboptimal features of some introns that dictate this sensitivity may normally be mitigated by the high concentration or activity of generic factors. Consistent with this view, repression of PRPF8 alters the splicing of introns with weak 5′ss. Thus, deficiencies in the activity of generic spliceosome components may compromise the splicing of a subset of introns, contributing to the onset of disease. As splicing decisions are usually made while the pre-mRNA is still being transcribed, regulatory links with transcription and chromatin structure take place at several levels. First, spliceosome components and regulators are recruited to the transcription machinery (e.g., the C-terminal domain of RNA polymerase II) to facilitate their transfer onto the emerging nascent pre-mRNA. Second, the speed of the elongating polymerase provides a kinetic window for the assembly of enhancer or repressor complexes that influence commitment between competing pairs of splice sites. Third, posttranslational modifications of histones and chromatin remodeling factors impact the speed of transcription as well as the recruitment of adapters that interact with splicing regulators.
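The notion of a "weak" 5′ss invoked above is usually made quantitative by scoring a site against the consensus, in practice with position weight matrices or maximum-entropy models such as MaxEntScan. The crude match-counting scorer below is a hypothetical stand-in for such tools, using the human 5′ss consensus (C/A)AG|GT(A/G)AGT:

```python
# Allowed bases at 5'ss positions -3..+6; the exon|intron boundary
# falls between indices 2 and 3.
CONSENSUS_5SS = ["CA", "A", "G", "G", "T", "AG", "A", "G", "T"]

def five_ss_score(site: str) -> float:
    """Fraction of consensus positions matched by a 9-nt splice site.
    A toy proxy for PWM scores: higher means a stronger site."""
    if len(site) != len(CONSENSUS_5SS):
        raise ValueError("expected a 9-nt site spanning positions -3..+6")
    return sum(b in allowed for b, allowed in zip(site, CONSENSUS_5SS)) / len(site)

strong = five_ss_score("CAGGTAAGT")  # perfect consensus match
weak   = five_ss_score("CAGGTATCT")  # divergence at +3/+4 weakens the site
print(strong, weak)  # 1.0 vs ~0.78
```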
Notably, histone modifications in specific chromatin regions can be triggered by Argonaute proteins bound to endogenous or exogenously provided small RNAs. Long noncoding RNAs (lncRNAs), whose expression varies in human diseases (e.g., MALAT-1 in cancer), may also contribute to splicing control by interacting with splicing factors to regulate their availability, or by orchestrating local epigenetic modifications that impact the speed of transcription or the recruitment of adapters.
More than 200 human diseases, including progeria and some forms of breast cancer and cystic fibrosis, are caused by point mutations that affect pre-mRNA splicing by destroying or weakening splice sites, or activating cryptic ones, thereby producing mRNAs that encode defective proteins or that are targets for nonsense-mediated mRNA decay (NMD). Splicing defects can also lead to the cotranscriptional degradation of nascent pre-mRNAs. A splice site mutation in BRAF is associated with resistance to the anticancer agent vemurafenib, but inhibitors of the generic splicing factor SF3B1 decrease the production of the mutation-induced BRAF variant and inhibit drug-resistant cell proliferation. Splice site variations can also have health-positive effects, as shown recently for a variant of LDLR that lowers non–high density lipoprotein cholesterol and protects against coronary artery disease. In addition to mutations at splicing signals themselves, mutations that destroy silencer or enhancer elements form another important group of disease-causing alterations that impact alternative splicing. Because more than half of the nucleotides in an exon may be part of splicing regulatory motifs, synonymous exon mutations, as well as an undetermined number of intron mutations, may further contribute to splicing misregulation that leads to disease. A recent computational analysis relying on RNA sequencing data from normal and disease samples and using >650,000 single-nucleotide variations (SNVs) identified >10,000 intronic and 70,000 missense and synonymous exonic SNVs occurring in splicing regulatory motifs that linked potential splicing defects with disease. Notably, the computational tool developed for this analysis predicted, with tantalizing accuracy, the impact of mutation on the direction and amplitude of splicing shifts associated with SMA and hereditary colorectal cancer and identified several intronic autism-associated SNVs with a high potential of splicing impact.
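The "direction and amplitude of splicing shifts" that such computational analyses predict is, in RNA-seq practice, usually expressed as a change in percent spliced in (ΔPSI) for a cassette exon. A minimal sketch using junction read counts only (real pipelines such as rMATS or MISO also model read length and estimation uncertainty):

```python
def psi(inclusion_reads: int, skipping_reads: int) -> float:
    """Percent spliced in for a cassette exon, from junction reads
    supporting inclusion vs. skipping of the exon."""
    total = inclusion_reads + skipping_reads
    if total == 0:
        raise ValueError("no junction reads cover this splicing event")
    return inclusion_reads / total

# Toy comparison of a disease sample against a control: a negative
# delta-PSI indicates a shift toward exon skipping.
delta_psi = psi(30, 70) - psi(80, 20)
print(round(delta_psi, 2))  # -0.5
```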
Disease-causing mutations in intronic or exonic control elements generally affect splicing by perturbing the binding of regulatory proteins that normally recognize them. The activity of the splicing regulators themselves can also be altered in disease. Changes in the nuclear level of regulators, including RBFOX2, hnRNP, and SR proteins, often occur in cancer. Although these changes frequently produce splice variants that affect cell cycle control, apoptosis, cell motility, and invasion, the molecular mechanisms that lead to these alterations and to specific downstream events that promote cancer remain largely unclear. Another way to alter the activity of splicing regulators is through sequestration. This is the case in DM1 and DM2 myotonic dystrophies, where muscleblind-like (MBNL) proteins are recruited to mRNAs carrying expansions of CUG and CCUG repeats, respectively. This sequestration compromises MBNL binding to normal RNA targets, deregulates the expression of CUGBP1, and alters alternative splicing of hundreds of transcripts not only in muscle tissues but also in the brain. The formation of cytoplasmic aggregates, possibly also triggered by mRNAs carrying nucleotide expansions, is frequently associated with neuropathological diseases such as amyotrophic lateral sclerosis (ALS) and frontotemporal dementia (FTD; see Splicing control defects in neuropathological and muscle-related disease section). In other instances, particularly in cancer, the localization and/or activity of splicing factors are misregulated by posttranslational modifications, e.g., phosphorylation. Mutation of generic spliceosomal components is also becoming a recurrent theme in disease, and recent advances in this area are mentioned throughout our review.
Likewise, with the increasing awareness that splicing decisions are coordinated with transcription, and thus with processes that modify chromatin, splicing alterations provoked by disease-associated lncRNAs and chromatin-modifying enzymes are likely to become an emerging focus of inquiry.
Here, we highlight recent work that has focused on the role of spliceosomal components in MDS, a heterogeneous group of disorders that affect hematopoietic progenitor cells and the production of different types of blood cells. MDS often progress to fully malignant acute myeloid leukemia (AML) with the abnormal accumulation of hematopoietic precursors arrested at an early stage of differentiation. We then provide examples of network restructuring of alternative splicing regulators that have been more solidly associated with carcinogenesis or that may constitute new concepts that link splicing factors with the emergence and maintenance of cancer.
Somatic heterozygous mutations in any of the spliceosomal proteins SF3B1, SRSF2, U2AF1, and the U2AF-related gene ZRSR2 occur in >50% of all MDS patients. No homozygous mutations have been described, and almost all mutations are missense, usually occurring at conserved positions. Bone marrow and cancer cells harboring these mutations display splicing abnormalities. This may be a direct consequence of the specific mutations because the RNAi-mediated depletion of wild-type SF3B1, SRSF2, and U2AF1 in a variety of cell types or the expression of mutated proteins in nonhematopoietic and cancer cell lines also disrupts alternative splicing. Here, we will summarize a set of studies that provide tantalizing insight into how mutated SRSF2, SF3B1, U2AF1, and ZRSR2 affect splicing programs and alter hematopoiesis in mice and MDS patients.

SRSF2

Notably, telomerase-negative mice with short telomeres that induce a persistent DNA damage response (DDR) present hematopoietic defects that recapitulate the clinical features of human MDS. Moreover, this telomere deficiency is associated with a decrease in the level of splicing factors that are frequently mutated in MDS (e.g., SRSF2, U2AF2, SF3B2, and SF3A3). Progenitor cells with deficient telomeres produce defective transcripts encoding components involved in DNA repair and chromatin structure. One splicing change reduces the level of the DNA methyl transferase DNMT3a, whose frequent mutation in MDS patients contributes to rapid progression to AML. SRSF2 is a splicing regulator that contributes to both generic and alternative splicing. The impact of short telomeres on the expression of SRSF2 inspired the creation of SRSF2-haploinsufficient mice. Remarkably, these mice display impaired erythroid differentiation and express several of the defective alternative splicing events caused by telomere dysfunction.
Further, they aberrantly splice transcripts encoding components involved in telomere maintenance, potentially providing a feedback loop that may elicit more splicing defects. Importantly, the MDS-associated P95H mutation in SRSF2 shifts the affinity of SRSF2 to a subset of binding sites, thus providing an explanation for the fact that the recapitulation of splicing defects observed in SRSF2-haploinsufficient mice is only partial. CD34+ hematopoietic stem cells from MDS patients with the P95H mutation have defects in the production of splice variants involved in telomere maintenance, DNA repair, and chromatin remodeling. Finally, murine bone marrow cells expressing the SRSF2-P95H mutant display features that are characteristic of MDS, including increased proliferation of progenitor cells and impaired differentiation. One of the P95H-mediated splicing alterations in mice reduces the expression of the histone methyl transferase EZH2, an outcome also occurring in human cells expressing mutant SRSF2. Strikingly, restoring expression of EZH2 in SRSF2 mutant mice partially rescues the hematopoietic defect. Overall, these studies provide strong evidence that MDS-associated mutations in SRSF2 affect the production of splice variants involved in chromatin structure that in turn elicit hematopoietic defects.

SF3B1

SF3B1 is a U2 snRNP–associated protein involved in branch point selection. The depletion of SF3B1 impairs the growth and the differentiation of myeloid cell lines. SF3B1 haploinsufficiency in mice compromises the repopulating ability of hematopoietic stem cells, but is not sufficient to induce MDS. Decreasing the levels of SF3B1 in myeloid cell lines alters the alternative splicing of transcripts encoding components involved in apoptosis and cell cycle control.
Interestingly, in bone marrow cells and progenitor bone marrow stem cells from SF3B1-mutated MDS patients, the expression and splicing of genes/transcripts associated with mitochondrial and heme-related functions are altered, providing a link with the abnormal iron homeostasis observed in MDS patients. Notably, iron homeostasis influences alternative splicing by modulating the activity of SRSF7. Interestingly, SRSF7 is itself abnormally spliced in MDS patients carrying the SF3B1 mutation. We speculate that this defective splicing may be responsible, at least in part, for the noted heme deficiency in MDS patients. SF3B1 is also part of a complex with BCLAF1, U2AF, and PRPF8 that is recruited to chromatin-bound BRCA1 to stimulate the splicing of transcripts encoding factors involved in DNA repair and the DDR. In line with this finding, several DNA repair and DDR genes (e.g., ABL1, BIRC2, and NUMA1) produce aberrantly spliced transcripts in cells of patients with SF3B1 mutations. Interestingly, one splicing alteration in these patient cells occurs in EZH1, a functional homologue of the histone methyl transferase EZH2, which is also defectively spliced in SRSF2-mutated cells and contributes to the MDS phenotype. Notably, the expression and alternative splicing of transcripts encoding RNA-processing factors, including PRPF8 and U2AF2, are also affected in SF3B1-mutated cells. This observation is important because mutations in PRPF8 and U2AF2 are found in MDS patients. However, although PRPF8 mutations are associated with alternative splicing defects, U2AF2 mutations appear neutral. Overall, these results suggest that SF3B1 mutations alter the splicing of transcripts involved in chromatin structure, DNA repair, and the DDR, thereby possibly providing an explanation for the accumulation of DNA damage in hematopoietic progenitor cells of MDS patients.
A function for SF3B1 in splice site selection has recently been associated with a specific interaction with histone marks that are enriched in exons. Although SF3B1 mutations often occur in the C-terminal HEAT repeats involved in protein–protein interactions, it remains to be shown whether these mutations affect the recruitment of SF3B1 to chromatin. If they do, combining a mutated SF3B1 with chromatin modification defects may amplify splicing alterations, gradually leading to more detrimental hematopoietic deficiencies.

U2AF1

U2AF1 is the smaller of two proteins that make up the U2AF heterodimer implicated in generic 3′ss recognition. Although the U2AF1-S34F mutation elicits hematopoietic abnormalities in mice that compromise the repopulating ability of stem cells, it does not elicit MDS. Many splicing defects in MDS patients with U2AF1 mutations occur in transcripts that encode components involved in cell cycle and splicing control. Expression of mutated U2AF1 proteins in a human erythroleukemic cell line causes thousands of splicing alterations, including some in transcripts encoding components involved in DNA methylation (e.g., DNMT3B, also affected by mutations in SF3B1), the DDR, and apoptosis. Different U2AF1 mutations alter its binding to 3′ss in different ways and lead to distinct yet overlapping splicing defects. A meta-transcriptome analysis using samples from U2AF1-S34F mutant mice, AML patients with U2AF1 mutations, and primary bone marrow cells overexpressing U2AF1-S34F uncovered common splicing alterations in transcripts encoding splicing proteins and components that are mutated in MDS and AML, or that are involved in hematopoietic stem cell function. These observations provide strong support to the view that mutated U2AF1 elicits abnormal hematopoiesis.

ZRSR2

ZRSR2 has been implicated in the splicing of introns that use the U12-dependent minor spliceosome in transcripts encoding cancer-relevant proteins such as PTEN, MAPK1, MAPK3, BRAF, and E2F2.
MDS-associated mutations in ZRSR2 are often inactivating, and depleting ZRSR2 reduces the growth and clonogenic potential of leukemia cell lines and alters the differentiation potential of human CD34+ bone marrow cells. Overall, the studies mentioned above suggest that alternative splicing likely makes a crucial contribution to the clinical evolution of MDS. Mutations in SRSF2, U2AF1, and SF3B1 may elicit a shared set of splicing alterations that trigger common hematopoietic defects and predispose stem cells to cancer development. The insight gained by studying the contribution of mutated splicing factors to MDS is likely to benefit our understanding of how mutations in splicing factors lead to cancer in general because mutations in SF3B1, U2AF1, and SRSF2 are also found in a variety of solid tumors. Although a recent compilation indicates that splicing factor genes are frequently mutated in different types of cancer, a more extensive characterization of the functional impact of these mutations will be required to determine whether these alterations preferentially contribute to specific types of cancer.
Although mutated SF3B1 and U2AF1 are expected to impact branch site/3′ss selection directly, we speculate that decreases in the level, and changes in the RNA binding specificity of splicing factors may also cause a second wave of alternative splicing changes through activation of the DDR . This model is based on the observation that when core spliceosomal components are delocalized or when RNA-processing factors such as SRSF1 and RNPS1 are depleted, R loop formation occurs and triggers the DDR to impact alternative splicing . If we are correct, drops in the level or changes in the activity of SRSF2, SF3B1, and U2AF1 may perturb alternative splicing through persistent R loop–mediated activation of the DDR , a consequence that would be consistent with the noted accumulation of DNA damage in MDS progenitor cells . DNA damage affects the expression, modification, and localization of several splicing regulatory proteins . Likewise, DNA damage caused by deficient telomeres in mice alters the expression of splicing regulators . Importantly, alternative splicing defects in MDS patients and MDS mouse models affect variants involved in apoptosis, cell cycle control, DNA repair, splicing control, and chromatin structure , precisely matching the functional categories of transcripts whose alternative splicing is affected by DNA-damaging agents . As drops in the activity of splicing factors are frequently associated with human pathologies, this model may be applicable to a variety of diseases in addition to MDS, including myotonic dystrophies, retinitis pigmentosa, ALS, and FTD.
Cancer metastasis involves cell migration and tissue invasion through reversible transitions from mesenchymal to epithelial cell types (mesenchymal–epithelial and epithelial–mesenchymal transition [MET and EMT, respectively]; ; ). ESRPs and RBFOX2 control the alternative splicing of several transcripts encoding cell adhesion proteins involved in the epithelial or mesenchymal phenotypes . A splice variant of the tyrosine kinase receptor RON that promotes cell migration and activates EMT is controlled by antagonistic interactions involving SRSF1 and hnRNP A1, A2, and H proteins . Likewise, hnRNP M antagonizes ESRP in the splicing of the cell adhesion molecule CD44 and plays a key role in the metastatic behavior of breast cancer cells in mouse models . In contrast, RBM47 behaves as a suppressor of breast cancer progression and metastasis . Consistent with their role in metastasis, the expression of hnRNP M and RBM47 is respectively high and low in aggressive human breast cancer . LIN28A, the expression of which increases in the HER2 breast cancer subtype, interacts with hnRNP A1 to modulate the production of a splice variant of ENAH that is associated with breast cancer metastasis . In addition to the lncRNA MALAT1 , which is implicated in metastasis, possibly by controlling alternative splicing , another lncRNA modifies chromatin to prevent the recruitment of a repressive chromatin-splicing adapter complex that normally enforces the mesenchymal-specific splicing of FGFR2 . lncRNAs may act on opposite functional sides of the oncogenic pathway. On the one hand, the lncRNA INXS , which interacts with Sam68 to favor the production of the proapoptotic Bcl-xS splice variant, is down-regulated in tumors and its overexpression in mouse xenograft models elicits tumor regression . On the other hand, the lncRNA FAS-AS1 interacts with RBM5 to reduce expression of the prosurvival soluble FAS variant .
Other lncRNAs that have been implicated in cancer include linc-p21 , PANDA , TUG1 , and Pint , but their impact on splicing and their contribution to cancer and metastasis are speculative and need to be investigated in more detail .
The overexpression of MYC contributes to malignant transformation and is associated with many cancers. Several studies have established a role for MYC in splicing control . MYC contributes to cancer metabolism and tumor growth by increasing the levels of splicing regulators PTBP1, hnRNP A1, and hnRNP A2 that shift the production of pyruvate kinase from splice variant PKM1, which drives oxidative phosphorylation, to PKM2, which elicits aerobic glycolysis . In glioblastoma, the up-regulation of hnRNP A1 promotes the splicing of a transcript encoding the MYC-interacting partner Max to generate ΔMax, producing a feed-forward loop that enhances MYC function and hnRNP A1 expression . MYC also stimulates the expression of the SR protein SRSF1, which drives oncogenesis through alternative splicing of a network of transcripts encoding signaling molecules (e.g., RON and MKNK2) and transcription factors (e.g., BIN1; ). SRSF1 also elicits the production of variants, such as CASC4 with antiapoptotic function, as well as MDM2 and cyclin D1 variants with prooncogenic properties . Positive feedback likely occurs because the SRSF1-mediated splice variant BIN1-12a no longer binds to MYC and lacks tumor suppressor activity . KRAS mutations that are frequently found in colorectal cancer activate the MAPK–extracellular signal-regulated kinase pathway to increase the level of the transcription factor ELK1 that in turn increases MYC with the expected impact on the production of PKM2 (Hollander, D., and Ast, G., personal communication). The activated MAPK–extracellular signal-regulated kinase pathway also stimulates the expression of Sam68, which increases the level of SRSF1 through alternative splicing . Interestingly, the expression of SRSF1 is also stimulated by the anticancer drug gemcitabine, producing a splice variant of MKNK2 that phosphorylates eIF4E to promote cell growth and drug resistance . 
Gemcitabine resistance is also conferred by the expression of PKM2 through the increased production of PTBP1 .
The term “oncogene addiction” has been used in the cancer field to describe the increased dependence of cancers on oncogenes for growth and survival . Recent results suggest that there is an analogous hypersensitivity of cancer cells to splicing factors. This relationship was established when it was noted that MYC-regulated genes and pathways provoke a general increase in pre-mRNA synthesis that imposes a strain on generic splicing . The fact that MYC up-regulates enzymes that modify snRNP proteins in cancer cells is consistent with the high demand for spliceosome components . Nevertheless, MYC-driven cancer cells are more sensitive to depletions of spliceosome components such as U2AF1 and SF3B1 . This splicing stress may also affect the production of functionally important splice variants because decreases in the level or activity of generic spliceosome components also affect alternative splicing. Other cancers may be similarly addicted to splicing factors. For example, PRPF6, a component of the tri-snRNP complex, is overexpressed in a subset of primary and metastatic colon cancers, and its depletion by RNAi in cell lines reduces cell growth and decreases the production of the oncogenic ZAK kinase splice variant . Likewise, expression of splicing regulator SRSF10 is increased in aggressive colon cancers. The siRNA-mediated depletion of SRSF10 decreases tumor formation in mice, an effect that is mediated, at least in part, by a drop in the production of the oncogenic splice variant of the splicing factor BCLAF1 . Thus, the overall stimulation in gene expression in cancer cells may increase their reliance on splicing factors, hence providing avenues to explore novel anticancer strategies.
As in cancer, pathogenic mechanisms in neurological and muscle-associated diseases can be caused by mutations in genes that affect splicing of their pre-mRNAs, or by mutations that affect the expression and the activity of splicing factors that control splice site utilization. Excellent reviews have recently presented the prevalence of alternative splicing, the role of RBPs, and the functional diversity of splice variants in neuronal systems . Here, we present recent advances that solidify the links between splicing control and neuronal and muscular pathologies . Identifying functionally relevant variants and changes in the expression/activity of regulators remains challenging, particularly in neuropathologies. This is mainly a result of tissue availability and heterogeneity, as well as difficulties in developing adequate animal models that recapitulate human phenotypes.
Although mutations in the splicing regulatory RBP TDP-43 are found in only a fraction of all cases of ALS and FTD, cytoplasmic inclusions and the nuclear depletion of TDP-43 are hallmarks of these diseases . Decreasing the expression of TDP-43 leads to neuronal defects in mice and affects the alternative splicing of transcripts encoding components important in neuronal development or implicated in neurological diseases . Splicing defects in ALS tissues occur in target TDP-43 transcripts . A recent study in mice indicates that a decrease in TDP-43 impairs splicing fidelity and leads to the aberrant inclusion of cryptic exons, an effect also seen in brain tissues from ALS-FTD patients . Similar to TDP-43, mutations and loss of nuclear function of FUS have been linked to alternative splicing changes in ALS, with a few pre-mRNA targets also regulated by TDP-43 . Cytoplasmic aggregates of mutated FUS or TDP-43 often sequester other splicing proteins, and this may also contribute to alterations in splicing profiles. For example, the ability of FUS to interact with U1 snRNP is likely responsible for the U1 snRNP cytoplasmic mislocalization in FUS-mutated ALS patient fibroblasts . ALS-associated mutations in hnRNP A1/A2 proteins also cause cytoplasmic aggregation . In several ALS-FTD patients, a GGGGCC repeat expansion that promotes G-quadruplex formation in the C9ORF72 gene sequesters splicing factors such as SRSF2 and hnRNP H, which in turn may promote extensive alternative splicing defects and neurodegeneration . Further studies should clarify whether the pathogenic impact of aggregates is strictly caused by loss of function or whether toxicity associated with aggregate formation also contributes to the clinical manifestation of ALS and FTD.
The deposition of oligomeric β-amyloid peptides and the formation of neurofibrillary tangles associated with the hyperphosphorylation of the microtubule-associated TAU protein have been implicated in AD . ApoE4 status is one of the strongest genetic risk factors, and it possibly affects both β-amyloid and neurofibrillary tangle pathologies. Many genes involved in these pathways, including ApoE4, sustain splicing mutations that have been linked to AD or present profiles of alternative splicing that are altered in AD tissues . RNA sequencing data suggest considerable alternative splicing abnormalities in AD tissues, including in transcripts encoding presenilin-1 and clusterin . Several splicing factors whose expression is misregulated in AD have been identified, including RBFOX, SR, and hnRNP A1 proteins, whereas splicing components, such as the U1 snRNP, appear to be depleted from the nucleus to form cytoplasmic aggregates . Interestingly, a depletion of U1 snRNP components in HEK293 cells disrupts the expression of splice variants encoding the amyloid precursor protein and increases the level of a β-amyloid peptide . HD is caused by expanded CAG repeats in the HTT gene that promote missplicing of its transcripts . The CAG repeats may also sequester splicing factors eliciting alternative splicing defects in other transcripts . Like individuals suffering from FTD, HD subjects display an imbalance in the production of TAU variants that promote deposits. Human HD tissues and a mouse model of HD show alterations in the expression of SRSF6, which may modulate TAU splicing, leading to TAU variants with a greater propensity to form deposits .
SZ is a complex neuronal disease promoting brain dysfunction. A variety of alternative splicing anomalies have been described in the brain or neuronal subtypes of SZ patients, including transcripts encoding a glutamate transporter (EAAT; ) and microcephalin (MCPH1; ). A polymorphism associated with an increased risk of SZ occurs in the dopamine receptor gene DRD2 and affects the ability of the splicing regulator ZRANB2 to control alternative splicing of DRD2 transcripts . The lncRNA gomafu, which is down-regulated in the gray matter from the superior temporal gyrus of SZ patients, is bound by the splicing regulators QKI and SRSF1 to control the alternative splicing of transcripts implicated in SZ . Other lncRNAs have been associated with neuronal stem cell differentiation and the control of alternative splicing through interaction with the neuronal splicing factor PTBP1 . However, although changes in the expression of lncRNAs involved in epigenetic modifications have been linked to neuronal diseases, their contribution to alternative splicing control remains to be examined .
Mutations in, or altered expression of, >100 genes have been linked to ASD . The majority of these genes produce splice variants, and recurrent splicing defects in some of them have been noted in autistic individuals . RBFOX proteins play a critical role in brain development and function , and RBFOX1 haploinsufficiency has been implicated in a variety of neuropsychiatric disorders including ASD . In the mouse brain, the depletion of RBFOX proteins alters the alternative splicing of transcripts implicated in ASD . Identification of a clinically relevant set of splicing events remains challenging because RBFOX proteins affect other pathways in RNA processing and in transcription. Moreover, three highly related RBFOX proteins with partially overlapping functions are expressed in the brain. A recent study has identified a highly dynamic set of microexons (3–15 nucleotides in size) in transcripts of different neurofunctional categories that are misregulated in the brain of autistic individuals. Several neural microexons affect protein–protein interactions that are crucial for neural function, and many are controlled by the splicing regulator nSR100, whose expression is important for normal nervous system development and is reported to be reduced in autistic brain tissues . Neural microexon splicing is also regulated by the PTBP1 and RBFOX proteins that are critical for normal neuronal function . Because microexons have also been linked to SZ and epilepsy, it will be most revealing to characterize the molecular pathways that regulate their inclusion in these neurological disorders.
Mutations that reduce the level of SMN proteins, which are involved in snRNP biogenesis, cause SMA. Although multiple alternative splicing defects have been noted, it remains unclear which splicing abnormalities cause the human phenotypes . As the SMN protein deficiency can be rescued by stimulating exon 7 inclusion in the SMN2 pre-mRNA, efforts deployed to achieve this goal in mouse models have produced encouraging results using oligonucleotides that block the activity of an intron splicing silencer or small molecules that stimulate exon 7 inclusion with apparent high specificity .
Mutations that truncate the sarcomeric protein titin cause dilated cardiomyopathy . A loss-of-function mutation in RBM20 affects the alternative splicing of titin , causing dilated cardiomyopathy . Hypoxic conditions associated with cardiac hypertrophy activate the expression of SF3B1, which in turn induces the production of a splice variant of ketohexokinase associated with contractile dysfunction .
Today, it is very clear that cells derived from patients with a variety of diseases display splicing defects, with studies relating to cancer and neuropathologies being the most prevalent. These splicing alterations may generate recognizable signatures that can guide diagnostics and may lead to the identification of new therapeutic targets. This important cataloguing effort is now increasing through genome-wide studies that exploit affordable RNA sequencing technologies and access to sequence repositories. Bioinformatic resources designed to interrogate these data are also expanding and are becoming widely available ( ; ; Hollander, D., and Ast, G., personal communication). The reliable identification of targets that support actionable therapeutic approaches is challenged by the fact that correlations are often derived from heterogeneous clinical samples. Moreover, although documentation of the functional impact of splice variants is accumulating , the causal contribution of disease-associated splice variants to the disease remains unknown in most cases. The functional assessment of a continuously expanding list of splice variants is an experimentally daunting task, possibly explaining why recent studies have restricted their analysis to mRNA variants encoding proteins with known distinct activities or with premature stop codons that decrease protein production. To understand the molecular mechanisms that lead to splicing alterations, it will be important to (a) assess the expression, posttranslational modifications, or mutations of splicing regulators and chromatin-modifying components; (b) profile the binding sites of the putative regulatory RBP on target pre-mRNAs in relevant tissue cells as was originally done for NOVA, whose inactivation causes paraneoplastic neurological disorders ; and (c) sequence the genome of diseased and normal tissues for each patient to identify somatic mutations that may contribute to splicing alterations.
This comparison is especially relevant to cancer, in which genomes are often intrinsically unstable. Moreover, in light of the model proposed earlier, defects in the activity or levels of splicing factors may lead to R loop–mediated mutations that may have a permanent impact on alternative splicing. To accommodate the analyses of this vast quantity of data, robust computational methods are being developed to link the production of recurrent variants with changes in RBPs . Alternatively, combining large-scale collections of molecular interaction datasets (protein–DNA, protein–RNA, and protein–protein) with cancer transcriptome datasets may reveal regulatory pathways relevant to cancer (Hollander, D., and Ast, G., personal communication). Putative connections can then be validated experimentally or confirmed, for example by using The Cancer Genome Atlas. These emerging procedures demonstrate the usefulness of network-based approaches to capture molecular relationships across different regulatory layers that become compromised or that emerge during diseases.
|
Oro‐Dental Characteristics in Patients With Adult‐Onset Hypophosphatasia Compared to a Healthy Control Group–A Case‐Control Study | a866b2b1-0e22-483f-b2eb-353dd123ee28 | 11680502 | Dentistry[mh] | Background Hypophosphatasia (HPP) is a rare inherited metabolic disease that can affect oral and dental health . The disease can be autosomal dominantly or autosomal recessively inherited and is caused by pathogenic variant(s) in the ALPL gene . This gene encodes for the tissue nonspecific alkaline phosphatase (TNSALP), which is important for bone and tooth mineralisation . In patients with HPP, TNSALP activity is persistently reduced, leading to pathological mineralisation of hard tissue and elevation of TNSALP substrates . HPP is a heterogeneous disease with a wide range of severity and various clinical manifestations . Clinical symptoms can include recurrent fractures, dental problems, reduced physical activity and chronic pain in muscles and bones . Previous research has suggested diagnostic criteria for HPP in adults and children . Patients diagnosed in adulthood (aHPP) are often mildly affected and heterozygous for a pathogenic variant in ALPL , while patients diagnosed in childhood (paediatric onset) typically show more severe symptoms and often are compound heterozygous for pathogenic variants in ALPL . Genetic studies indicate that the prevalence of the mild forms of HPP often diagnosed in adults is up to 1:508, which indicates that the disease is likely unrecognised and underdiagnosed . Various oral and dental manifestations have previously been reported in patients with HPP including premature loss of primary teeth, premature loss of permanent teeth and marginal bone loss . The premature loss of teeth has for many years been attributed to disturbed cementum formation . In addition, ankylosis, tooth agenesis, late eruption of primary and permanent dentition and impaction have also been found in such patients. 
Furthermore, small and bulbous crowns, cervical constriction, enamel and dentine hypoplasia, enamel hypomineralisation, increased occurrence of dental caries and increased tooth wear and fractures have been reported in patients with HPP. Even research without a specific orthodontic focus has described underdevelopment of the alveolar process and malocclusions such as open bite, crowding and anterior or posterior crossbite in such patients . We hypothesise that patients with aHPP have altered oro‐dental characteristics due to pathological mineralisation of hard tissue, including a higher prevalence of altered tooth and root morphology, opacities, tooth wear and fractures as well as increased bone loss. Even though the relationship between HPP and involvement of hard tissues such as teeth was established decades ago , oral and dental characteristics in patients with aHPP have so far only been described in case reports, case series and family studies, primarily focusing on the primary dentition . Thus, no previous study has systematically described oral and dental characteristics in patients with aHPP . The aim of the present study is to investigate oral and dental characteristics in patients with aHPP compared to a group of healthy controls.
Materials and Methods 2.1 Study Design and Participants This case–control study was performed at the Resource Centre for Rare Oral Diseases, Copenhagen University Hospital, Rigshospitalet, Copenhagen, between September 2022 and June 2023. The present study was conducted according to a standardised protocol validated in previous studies in patients with Ehlers‐Danlos syndrome . A power calculation was performed prior to the study, based on a previous study which reported that 8% of healthy adults have dental enamel defects . Assuming that 50% of patients with HPP have dental enamel defects, with a type 1 error risk of 5% and a type 2 error risk of 20% (i.e., 80% power to detect a difference between the two groups), the power calculation shows that approximately 18 participants are needed in each group to detect a significant difference. A total of 46 patients with HPP were recruited from a previous study, which was conducted between September 2017 and February 2020 . All adults with HPP were diagnosed due to biochemical and clinical features of HPP . In addition, the diagnosis of HPP was verified by genetic testing. None of the participants were diagnosed on the basis of dental signs alone. Biochemical, clinical as well as genetic characteristics of this cohort are described by Hepp et al. . Patients with aHPP and an age between 18 and 80 years were included in the present study. Pregnant patients with aHPP were excluded. The healthy controls were recruited by advertisement in dental clinics, the Department of Odontology and online. The inclusion criteria for healthy controls were no known diseases or syndromes, age ranging from 18 to 80 years, at least 24 teeth, neutral occlusion and no previous orthodontic treatment. Healthy controls were excluded if they had presence of multiple degraded, untreated clinical crowns (caries or fracture), sleep disorders, pregnancy as well as familial predisposition to HPP or rickets (Figure ).
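The sample-size reasoning above can be reproduced with the standard pooled-variance formula for comparing two independent proportions. The following is a sketch using only Python's standard library; the authors do not state which software they used, and the 8% and 50% defect prevalences are the figures quoted in the text:

```python
import math
from statistics import NormalDist

def sample_size_two_proportions(p1: float, p2: float,
                                alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-group sample size for a two-sided comparison of two proportions,
    using the pooled-variance normal approximation."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # about 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # about 0.84 for power = 0.80
    p_bar = (p1 + p2) / 2                       # pooled proportion
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# 8% enamel-defect prevalence in healthy adults vs. an assumed 50% in HPP
n = sample_size_two_proportions(0.08, 0.50)
print(n)  # 18
```

With these inputs the formula gives about 17.1, rounded up to 18 per group, consistent with the approximately 18 participants per group stated above.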
All participants were interviewed and examined by one examiner (FJ) under supervision of XH and LS using standard and validated methods . The study was approved by the Danish National Committee on Health Research Ethics (Protocol H‐22008426) and the Danish Data Protection Agency (514‐0739/22‐3000). All participants provided informed consent to participate in the study. Furthermore, the study protocol was established in accordance with the guidelines of the Declaration of Helsinki. 2.2 Interview and Clinical Examination The interview included questions about tendency to spontaneous fractures of the teeth, dental caries, agenesis and early loss of primary and permanent teeth. Each question had answering alternatives of ‘yes’, ‘no’ or ‘don't know’. The clinical examination comprised an assessment of the dentition, caries experience, oral hygiene and oral mucosa . The following was registered: teeth present , tooth fractures, attrition with dentin exposure and enamel hypoplasia or opacities according to the developmental defects of enamel (DDE) index , mucosal bruising or ulceration and the presence of lingual and inferior labial frenulum and oral hygiene index (OHI) . A tooth was registered as present and scored ‘1’ when a part of the tooth had penetrated the mucosa and registered as not present and scored as ‘0’ when the tooth was either not erupted, extracted or more than two‐thirds of the tooth surface was completely decayed or fractured . The presence of a crown fracture, attrition with dentin exposure, enamel hypoplasia (pits, grooves or areas) or enamel opacity (white, yellow or brown) on the erupted portion of the tooth and the presence of mucosal bruising or ulceration was scored as present ‘1’ or not present ‘0’ . Further, the presence of lingual and inferior labial frenulum was scored as present ‘0’ or not present ‘1’ . The OHI was calculated as the total debris score divided by the number of surfaces scored. 
No plaque was scored as ‘0’, plaque covering < 1/3 of the tooth surface was scored as ‘1’, plaque covering between > 1/3 and < 2/3 of the tooth surface was scored as ‘2’ and plaque covering > 2/3 of the tooth surface was scored as ‘3’ . 2.3 Radiographic Examination The radiographic examination included a panoramic radiograph (OP) and a cone‐beam computed tomography (CBCT) scan, both recorded at the Cephalometric laboratory, Department of Odontology, Copenhagen University by the same radiologist. The OPs were obtained in a ProMax 2D (Panoramic Xray Unit, Planmeca Oy, Helsinki, Finland) and used to record the number of teeth, tooth agenesis, supernumerary teeth, impacted teeth, decayed, missing and filled teeth (DMFT value) and root‐filled teeth. In addition, the presence of deviation in enamel radiolucency, deviation in crown morphology, taurodontism, gracile roots, deviation in root morphology, pulp stones/denticles and pulp obliterations were registered. The presence was scored as ‘1’, and the absence was scored as ‘0’ . The CBCT scans were obtained in a ProMax 3D Max (Planmeca Oy, Helsinki, Finland, serial number 509S05‐0703) with the following settings: 96 kV, 5 mA, exposure time of 9.020–9.113 s, image size of 575 × 575 × 433 and voxel size of 400 μm. The results were saved as Digital Imaging and Communication in Medicine (DICOM) format and imported to the Planmeca Romexis Viewer (5.3.5.80) computer programme, which was used to create 3D images in the Explorer 3D sub‐module, where further analysis was performed. The 3D image registration was performed to achieve optimal visualisation of the selected registration of teeth in the coronal, sagittal and axial views . The CBCT scans were used to evaluate the marginal bone level, crown height and root length on the first molars and central incisors in both jaws (Figure ). 
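The OHI described in Section 2.2 reduces to the mean debris score over the scored surfaces. A minimal sketch (the six surface scores below are invented for illustration):

```python
def oral_hygiene_index(debris_scores):
    """OHI = total debris score / number of surfaces scored.
    Each surface is scored 0 (no plaque), 1 (plaque < 1/3 of the surface),
    2 (between 1/3 and 2/3) or 3 (> 2/3 of the surface)."""
    if not debris_scores:
        raise ValueError("at least one surface must be scored")
    return sum(debris_scores) / len(debris_scores)

# Hypothetical patient with six scored surfaces
ohi = oral_hygiene_index([0, 1, 2, 3, 1, 1])
print(round(ohi, 2))  # 1.33
```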
The marginal bone level was measured as the distance between the cement–enamel junction (CEJ) and the most apical part of the bone level at the mesial, distal, lingual/palatinal and buccal aspect of the tooth in the coronal and sagittal view. The crown height was defined as the distance from the incisal edge of the incisors or the buccal cusp tip of the molars perpendicular to the line connecting the most mesial and distal CEJ in the coronal view. The root length was measured as the distance between the most apical point of the root perpendicular to the line connecting the most mesial and distal CEJ in the coronal view. The crown/root ratio was subsequently calculated by dividing the crown height by the root length . Regarding measurement of the marginal bone level, a small value indicates that there was no bone loss, whereas a larger value indicates that bone loss had occurred. All parameters were measured in millimetres. 2.4 Reliability Following the calibration of XH and FJ using 15 randomly selected OPs and CBCTs, the inter‐ and intra‐observer agreement was assessed on 25 randomly selected OPs and CBCTs. XH and FJ performed the inter‐observer registrations, while FJ conducted the intra‐observer registrations by repeating the measurements and registrations at a 1‐month interval. Regarding the registrations on the OPs, no systematic error was found, and the inter‐ and intra‐observer agreement for the registrations on the OPs was κ = 0.77–1 and κ = 0.88–1, respectively. For the measurements on CBCT, no systematic error was found; the method error according to Dahlberg's formula was 0.09–0.71 mm, and the Houston reliability coefficient was 0.66–0.99. The clinical recordings, including presence of teeth, crown fracture, attrition with dentin exposure, enamel hypoplasia and enamel opacity, were re‐assessed on clinical photographs by two of the authors. In case of doubt, the tooth was registered as having no deviation.
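The Dahlberg method error quoted above (0.09–0.71 mm) is computed from duplicate measurements as ME = sqrt(Σd²/2n), where d is the difference between the first and second measurement of the same site and n is the number of duplicated measurements. A sketch with invented duplicate readings:

```python
import math

def dahlberg_method_error(first, second):
    """Dahlberg's formula: ME = sqrt(sum(d_i^2) / (2 * n)),
    where d_i is the difference between duplicate measurements of site i."""
    if len(first) != len(second) or not first:
        raise ValueError("need two equally long, non-empty measurement series")
    squared_diffs = sum((a - b) ** 2 for a, b in zip(first, second))
    return math.sqrt(squared_diffs / (2 * len(first)))

# Hypothetical duplicate marginal bone level measurements (mm)
m1 = [2.1, 3.4, 1.8, 2.9]
m2 = [2.0, 3.6, 1.8, 2.7]
print(round(dahlberg_method_error(m1, m2), 3))  # 0.106
```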
2.5 Statistical Analysis The statistical analyses were performed in SPSS (IBM, version 28.0), and the level of significance was set to 5%. The categorical data were analysed using Fisher's exact test for 2 × 2 tables and the Fisher–Freeman–Halton exact test for tables larger than 2 × 2. Subsequently, multiple logistic regression was performed for the statistically significant categorical variables to adjust for age and gender. For statistically significant variables where ‘0’ was included in the dataset, multiple logistic regression was not performed. The normality of the continuous data was determined by assessing Q‐Q plots and the Shapiro–Wilk test. Subsequently, the data were analysed using a t‐test for normally distributed data and a Wilcoxon rank sum test for non‐normally distributed data. Multiple linear regression was then performed on the statistically significant continuous variables to adjust for age and gender. In addition, dentition, DMFT and marginal bone level were adjusted for OHI using backwards elimination .
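For the 2 × 2 tables, Fisher's exact test conditions on the table margins and sums the hypergeometric probabilities of every table at least as extreme as the one observed. The analyses above were run in SPSS; the following standard-library sketch of the two-sided test is for illustration only and uses Fisher's classic tea-tasting table rather than study data:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum P(table) over all tables with the same margins whose probability
    does not exceed that of the observed table."""
    r1, r2 = a + b, c + d                  # row totals
    c1, n = a + c, a + b + c + d           # first column total, grand total
    denom = comb(n, c1)

    def prob(k):                           # P(top-left cell = k), hypergeometric
        return comb(r1, k) * comb(r2, c1 - k) / denom

    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)  # feasible values of the top-left cell
    return sum(p for p in (prob(k) for k in range(lo, hi + 1))
               if p <= p_obs + 1e-12)      # tolerance guards floating-point ties

# Fisher's tea-tasting table [[3, 1], [1, 3]]
print(round(fisher_exact_two_sided(3, 1, 1, 3), 3))  # 0.486
```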
Study Design and Participants This case–control study was performed at the Resource Centre for Rare Oral Diseases, Copenhagen University Hospital, Rigshospitalet, Copenhagen, between September 2022 and June 2023. The present study was conducted according to a standardised protocol validated in previous studies in patients with Ehlers‐Danlos syndrome . Power calculation was performed prior to the study based on a previous study which reported that 8% of healthy adults have dental enamel defects . If 50% of patients with HPP have dental enamel defects with a risk of type 1 error of 5% and type 2 error of 20% and an 80% chance of detecting a difference between the two groups, the power calculation shows that approximately 18 participants are needed in each group to detect a significant difference. A total of 46 patients with HPP were recruited from a previous study, which was conducted between September 2017 and February 2020 . All adults with HPP were diagnosed due to biochemical and clinical features of HPP . In addition, the diagnosis HPP was verified by genetic testing. None of the participants were diagnosed alone due to dental signs. Biochemical, clinical as well as genetic characteristics of this cohort are described by Hepp et al. . Patients with aHPP and an age between 18 and 80 years were included in the present study. Pregnant patients with aHPP were excluded. The healthy controls were recruited by advertisement in dental clinics, the Department of Odontology and online. The inclusion criteria for healthy controls were no known diseases or syndromes, age ranging from 18 to 80 years, at least 24 teeth, neutral occlusion and no previous orthodontic treatment. Healthy controls were excluded if they had presence of multiple degraded, untreated clinical crowns (caries or fracture), sleep disorders, pregnancy as well as familial predisposition to HPP or rickets (Figure ). 
All participants were interviewed and examined by one examiner (FJ) under the supervision of XH and LS, using standard and validated methods . The study was approved by the Danish National Committee on Health Research Ethics (Protocol H‐22008426) and the Danish Data Protection Agency (514‐0739/22‐3000). All participants provided informed consent to participate in the study. Furthermore, the study protocol was established in accordance with the guidelines of the Declaration of Helsinki.
Interview and Clinical Examination The interview included questions about tendency to spontaneous fractures of the teeth, dental caries, agenesis and early loss of primary and permanent teeth. Each question had the response options 'yes', 'no' or 'don't know'. The clinical examination comprised an assessment of the dentition, caries experience, oral hygiene and oral mucosa . The following were registered: teeth present, tooth fractures, attrition with dentin exposure, enamel hypoplasia or opacities according to the developmental defects of enamel (DDE) index , mucosal bruising or ulceration, the presence of lingual and inferior labial frenulum, and the oral hygiene index (OHI) . A tooth was registered as present and scored '1' when a part of the tooth had penetrated the mucosa, and registered as not present and scored '0' when the tooth was either not erupted, extracted or more than two‐thirds of the tooth surface was completely decayed or fractured . The presence of a crown fracture, attrition with dentin exposure, enamel hypoplasia (pits, grooves or areas) or enamel opacity (white, yellow or brown) on the erupted portion of the tooth and the presence of mucosal bruising or ulceration were scored as present '1' or not present '0' . Further, the presence of lingual and inferior labial frenulum was scored as present '0' or not present '1' . The OHI was calculated as the total debris score divided by the number of surfaces scored. No plaque was scored as '0', plaque covering less than 1/3 of the tooth surface as '1', plaque covering between 1/3 and 2/3 as '2' and plaque covering more than 2/3 as '3' .
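The OHI described above is simply the mean of the per-surface debris scores. A minimal sketch (function name hypothetical):

```python
def oral_hygiene_index(debris_scores):
    """OHI = total debris score / number of surfaces scored.
    Per-surface scores: 0 = no plaque, 1 = < 1/3 of the surface covered,
    2 = between 1/3 and 2/3 covered, 3 = > 2/3 covered."""
    if not debris_scores:
        raise ValueError("no surfaces scored")
    if any(s not in (0, 1, 2, 3) for s in debris_scores):
        raise ValueError("debris scores must be 0-3")
    return sum(debris_scores) / len(debris_scores)

# six scored surfaces, illustrative values only
print(oral_hygiene_index([0, 1, 2, 3, 1, 1]))  # → 1.3333333333333333
```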
Radiographic Examination The radiographic examination included a panoramic radiograph (OP) and a cone‐beam computed tomography (CBCT) scan, both recorded at the Cephalometric laboratory, Department of Odontology, Copenhagen University by the same radiologist. The OPs were obtained in a ProMax 2D (Panoramic X‐ray Unit, Planmeca Oy, Helsinki, Finland) and used to record the number of teeth, tooth agenesis, supernumerary teeth, impacted teeth, decayed, missing and filled teeth (DMFT value) and root‐filled teeth. In addition, the presence of deviation in enamel radiolucency, deviation in crown morphology, taurodontism, gracile roots, deviation in root morphology, pulp stones/denticles and pulp obliterations were registered. The presence was scored as '1', and the absence was scored as '0' . The CBCT scans were obtained in a ProMax 3D Max (Planmeca Oy, Helsinki, Finland, serial number 509S05‐0703) with the following settings: 96 kV, 5 mA, exposure time of 9.020–9.113 s, image size of 575 × 575 × 433 and voxel size of 400 μm. The results were saved in Digital Imaging and Communication in Medicine (DICOM) format and imported to the Planmeca Romexis Viewer (5.3.5.80) computer programme, which was used to create 3D images in the Explorer 3D sub‐module, where further analysis was performed. The 3D image registration was performed to achieve optimal visualisation of the selected teeth in the coronal, sagittal and axial views . The CBCT scans were used to evaluate the marginal bone level, crown height and root length on the first molars and central incisors in both jaws (Figure ). The marginal bone level was measured as the distance between the cement–enamel junction (CEJ) and the most apical part of the bone level at the mesial, distal, lingual/palatinal and buccal aspect of the tooth in the coronal and sagittal view.
The crown height was defined as the distance from the incisal edge of the incisors or the buccal cusp tip of the molars perpendicular to the line connecting the most mesial and distal CEJ in the coronal view. The root length was measured as the distance between the most apical point of the root perpendicular to the line connecting the most mesial and distal CEJ in the coronal view. The crown/root ratio was subsequently calculated by dividing the crown height by the root length . Regarding measurement of the marginal bone level, a small value indicates that there was no bone loss, whereas a larger value indicates that bone loss had occurred. All parameters were measured in millimetres.
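The crown/root ratio defined above is the crown height divided by the root length, both in millimetres. A minimal sketch (function name and example values are ours, not from the study):

```python
def crown_root_ratio(crown_height_mm: float, root_length_mm: float) -> float:
    """Crown/root ratio as defined in the text: crown height divided by
    root length, both measured in mm perpendicular to the line
    connecting the most mesial and distal CEJ."""
    if root_length_mm <= 0:
        raise ValueError("root length must be positive")
    return crown_height_mm / root_length_mm

# e.g. a central incisor with a 10.5 mm crown and a 13.0 mm root
print(round(crown_root_ratio(10.5, 13.0), 2))  # → 0.81
```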
Reliability Following the calibration of XH and FJ using 15 randomly selected OPs and CBCTs, the inter‐ and intra‐observer agreement was assessed on 25 randomly selected OPs and CBCTs. XH and FJ performed the inter‐observer registrations, while FJ conducted the intra‐observer registrations by repeating the measurements and registrations at a 1‐month interval. Regarding the registrations on the OPs, no systematic error was found, and the inter‐ and intra‐observer agreement was κ = 0.77–1 and κ = 0.88–1, respectively. For the measurements on CBCT, no systematic error was found; the method error according to the Dahlberg formula was 0.09–0.71 mm and the Houston reliability coefficient was 0.66–0.99. The clinical recordings, including presence of teeth, crown fracture, attrition with dentin exposure, enamel hypoplasia and enamel opacity, were re‐assessed on clinical photos by two of the authors. In case of doubt, the tooth was registered as having no deviation.
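The Dahlberg method error quoted above is computed from duplicate measurements as the square root of the summed squared differences divided by twice the number of pairs. A minimal sketch, with illustrative (not study) values:

```python
from math import sqrt

def dahlberg_error(first, second):
    """Dahlberg method error for duplicate measurements:
    sqrt(sum(d_i^2) / (2 n)), where d_i is the difference
    between the paired first and second readings."""
    if len(first) != len(second) or not first:
        raise ValueError("need two equal-length, non-empty series")
    d2 = sum((a - b) ** 2 for a, b in zip(first, second))
    return sqrt(d2 / (2 * len(first)))

# duplicate CBCT readings in mm, illustrative values only
m1 = [2.1, 3.4, 1.8, 2.9]
m2 = [2.0, 3.6, 1.7, 3.0]
print(round(dahlberg_error(m1, m2), 3))  # → 0.094
```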
Statistical Analysis The statistical analyses were performed in SPSS (IBM, version 28.0), and the level of significance was set to 5%. The categorical data were analysed using Fisher's exact test for 2 × 2 tables and the Fisher–Freeman–Halton exact test for tables larger than 2 × 2. Subsequently, multiple logistic regression was performed for all statistically significant categorical variables to adjust for age and gender. For statistically significant variables where a count of '0' occurred in the dataset, multiple logistic regression was not performed. The normality of the continuous data was determined by assessing Q‐Q plots and the Shapiro–Wilk test. Data were then analysed using the t‐test for normally distributed data and the Wilcoxon rank sum test for non‐normally distributed data. Multiple linear regression was then performed on the statistically significant continuous variables to adjust for age and gender. In addition, dentition, DMFT and marginal bone level were also adjusted for OHI using backwards elimination .
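The continuous-variable branch of this pipeline (Shapiro–Wilk normality check, then t-test or Wilcoxon rank sum) can be sketched as below. This is an illustrative reimplementation, not the SPSS procedure used in the study: the function name and the α threshold applied to the Shapiro–Wilk test are our assumptions, and SciPy's `mannwhitneyu` implements the Wilcoxon rank-sum test.

```python
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Pick t-test vs Wilcoxon rank sum based on Shapiro-Wilk normality,
    mirroring the test-selection logic described above."""
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    if normal:
        name, res = "t-test", stats.ttest_ind(a, b)
    else:
        # Wilcoxon rank sum test == Mann-Whitney U test
        name, res = "Wilcoxon rank sum", stats.mannwhitneyu(a, b)
    return name, res.pvalue

print(compare_groups([1, 2, 3, 4, 5, 6], [2, 3, 4, 5, 6, 7])[0])  # → t-test
```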
Results

3.1 Study Population A total of 51 participants, 20 patients with aHPP and 31 healthy controls, were included in the study (Figure ). Patients with aHPP (4 men and 16 women) had a mean age of 53.10 ± 12.45 years (age range: 24–74 years). The 31 healthy controls consisted of 4 men and 27 women with a mean age of 48.61 ± 13.30 years (age range: 22–71 years) (Figure ). No statistically significant differences in age and gender were found between the groups. Only significant results adjusted for age and gender are described in the results section. However, significant values without adjustment for age and gender are also reported in cases where multiple logistic regression was not possible.

3.2 Interview and Clinical Examination The results from the interviews revealed that a significantly higher number of patients with aHPP had experienced tooth fractures, caries in permanent teeth and early loss of permanent teeth compared to healthy controls. Results from the interviews are presented in Table . The clinical examination showed that the presence of 28, 48 and 46 and attrition of 11 were significantly lower in patients with aHPP compared to healthy controls (Table ). In addition, the presence of 14, 16, 24, 26 and 27 was significantly lower and the opacity of 31, 33, 43 and 44 was significantly higher in patients with aHPP than in healthy controls (Table ). No significant difference was found in OHI between patients with aHPP and healthy controls.

3.3 OP Examination Results from the OP examination are shown in Tables and . Patients with aHPP had significantly lower presence of 28, 38, 46 and 48 and a lower number of teeth. Furthermore, the presence of denticles was significantly higher in patients with aHPP compared to healthy controls (Tables and ). In addition, the presence of 14, 16, 24, 26 and 27 was significantly lower in patients with aHPP than in healthy controls (Table ).
3.4 CBCT Examination The distances between the CEJ and the marginal bone level (buccal and palatinal for 11; mesial, distal, buccal and palatinal for 21; distal for 26; and mesial and buccal for 46) were significantly higher in patients with aHPP than in healthy controls (Table ). Moreover, patients with aHPP had a significantly higher crown height for 11 than healthy controls (Table ).
Discussion To our knowledge, this is the first study investigating oro‐dental manifestations in patients with aHPP compared to a group of healthy controls. In the present study, patients with aHPP had a subjective experience of poorer dental health, which was not consistent with all of the objective findings. Objective findings in patients with aHPP included lower presence of specific teeth and lower prevalence of permanent teeth, higher opacity of a few teeth, higher presence of denticles and greater marginal bone loss at specific sites. Tooth fractures have previously been described in a single patient with HPP in a family study and have not previously been investigated systematically. In the present study, a significantly higher number of patients with aHPP reported experiencing dental fractures, which could not be confirmed in the clinical examination. The higher subjective experience of tooth fractures in patients with aHPP may be associated with a subjective feeling of having 'fragile teeth' due to reduced mineralisation of enamel and/or dentin . In addition, patients are possibly more aware of their dental health after receiving the diagnosis of HPP, which may also lead to a subjective overinterpretation of dental problems. Previous studies have hypothesised an association between tooth fractures/crackled teeth and reduced mineralisation of enamel and/or dentin in patients with HPP , but in more severely affected HPP patients than in the present study. The disagreement between the studies may be explained by the milder phenotype seen in patients with aHPP compared to patients with paediatric‐onset HPP. Premature loss of particularly primary incisors has previously been described as a cardinal symptom of HPP , which is in disagreement with the present study, where loss of primary teeth did not differ between the groups.
This discrepancy may be because it is difficult for adults to recall what happened during early childhood (20% answered 'don't know' in the interview), or because the milder phenotype of patients with aHPP (compared with patients with paediatric‐onset HPP) may have caused only minor changes to the periodontium, which could explain why premature loss of primary teeth was not found at a higher level in aHPP patients than in controls in the present study. In comparison, a significantly higher number of patients with aHPP reported early loss of permanent teeth in the present study. In addition, data from the clinical examination showed that the presence of specific teeth and the prevalence of permanent teeth were significantly lower in patients with aHPP, which is in agreement with previous studies . It is hypothesised that the early tooth loss in patients with HPP may be caused by periodontal degradation or loss of alveolar bone . Furthermore, a histological study of patients with HPP has shown a defect in the root cementum, which may cause Sharpey's fibres of the periodontal ligament to fail to connect to the tooth root, resulting in tooth loss . In the present study, the subjective experience of caries in permanent teeth was significantly higher in patients with aHPP, but this was not verified by the DMFT score, as no significant difference was found between the groups. The results indicate that patients with aHPP had a greater subjective sense of their caries activity compared with the objective caries activity. However, this study did not include a clinical examination of caries activity, and OP and CBCT were recorded instead of bitewing X‐rays; thus, the conditions for diagnosing caries were not optimal . Caries could be a possible cause of missing teeth in patients with HPP, as caries has previously been described in patients with HPP, although mainly in case reports and review articles .
However, these studies have not included information on oral hygiene and plaque levels, which is essential since caries depends on the presence of bacterial flora (plaque) . In the present study, DMFT was adjusted for OHI, and there was no significant difference in OHI between patients with aHPP and healthy controls. High OHI is usually associated with an increased risk of periodontitis and caries . The differences in the number of teeth present and dental disease between the groups may therefore not be explained by differences in OHI. In addition, patients with aHPP generally reported frequent dental visits due to knowledge of an increased risk of periodontal disease in HPP. This may also explain the good oral hygiene among patients with aHPP observed in the present study. Interestingly, patients with aHPP had significantly less attrition of 11 in the present study. It was expected that patients with HPP would be more susceptible to wear, based on empirical evidence and the reduced activity of TNSALP in HPP, which may lead to less mineralised enamel and dentin. In the present study, questions regarding tooth grinding were not included. Thus, less attrition of 11 may be caused by a potential difference in the amount and pattern of grinding between the groups, but that would not explain why less attrition appeared on only a single tooth. On the other hand, the significantly higher prevalence of opacities on a few teeth in patients with aHPP in the present study suggests that there is a disturbance in the mineralisation of the dental hard tissues due to reduced activity of TNSALP in HPP . Thus, further investigation of the dental hard tissues is needed, since the opacities could be related to other causes. To our knowledge, marginal bone level, crown height and root length have not previously been assessed on CBCTs in patients with HPP.
The marginal bone loss was significantly greater at specific sites in patients with aHPP than in healthy controls. As the result was adjusted for OHI and there was no significant difference in OHI between the aHPP group and healthy controls, the marginal bone loss could not be explained by poor oral hygiene in the aHPP patients in the present study. Greater bone loss increases the risk of tooth loss and may explain the tooth loss reported in the questionnaire survey, the clinical examination and the examination of the OPs. Histological studies on teeth from patients with HPP are needed to discover the histological explanation for the bone loss . Patients with aHPP had a significantly higher crown height of 11, which was unexpected. The higher crown height of 11 may perhaps be related to the lower attrition of 11. On the other hand, some patients with aHPP had difficulty standing still in the CBCT machine, which resulted in artefacts on the CBCT scans, making the accuracy of some of the measurements questionable. Furthermore, participants from both the aHPP and healthy control groups had restored dentitions, which also produced artefacts on the CBCTs and may complicate interpretation of the CBCT measurements. The control group was on average 4.49 years younger than the aHPP group. This may have an impact on the results, as there is generally a correlation between higher age and greater bone loss, and between higher age and higher DMFT . In addition, there was a different gender distribution, with 87.1% women in the control group and 80% women in the aHPP group. Based on these limitations, the results were adjusted for age and gender. No systemic diseases that could affect the results were found in the two groups. A clinical periodontal examination was not performed, and therefore it cannot be excluded that the bone loss in the aHPP group was due to periodontal inflammation.
The results of the present study may improve knowledge about dental and oral manifestations in patients with aHPP. Although HPP is classified as a rare disease , dentists may meet patients with HPP in their everyday clinical practice. Dentists may also be the first to observe symptoms of HPP and can refer patients for examination by their own doctor or by specialists . Therefore, awareness and knowledge of oro‐dental manifestations in HPP are essential to improve diagnostics as well as to provide supportive dental care.
Conclusion Patients with aHPP had a subjective experience of poor dental health, which was not always in accordance with the objective findings. Loss of permanent teeth, less attrition, tooth opacities, denticles and a larger distance between the CEJ and the marginal bone level are possible oro‐dental findings in patients with aHPP. Future studies investigating the histological characteristics of teeth in patients with HPP are required to increase knowledge of the impact on the dental hard tissues in aHPP patients. The results of the present study contribute to a more detailed understanding of dental and oral manifestations in patients with aHPP and may thus prove valuable in the dental care of aHPP patients, thereby helping to delay tooth damage and tooth loss. In addition, the medical community will be able to utilise this new knowledge, as the diagnosis of adults with the mild form of HPP is difficult and complex. For patients and dentists, it is also of great importance to be able to use this new knowledge to provide more preventive care. Furthermore, craniofacial and orthodontic findings can be of interest, and additional studies on this matter may be published in the future.
Freja Fribert Jørgensen has contributed to acquisition of data, analysis and interpretation of data; has been involved in drafting the manuscript and revising it critically for important intellectual content; has given final approval of the version to be published and agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. Xenia Hermann has made substantial contributions to conception and design, acquisition of data and interpretation of data; has been involved in revising the manuscript critically for important intellectual content; has given final approval of the version to be published and agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. Nicola Hepp has made substantial contributions to conception and design and acquisition of data; has been involved in revising the manuscript critically for important intellectual content; has given final approval of the version to be published and agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. Liselotte Sonnesen has made substantial contributions to conception and design, acquisition of data and interpretation of data; has been involved in revising the manuscript critically for important intellectual content; has given final approval of the version to be published and agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
The authors declare no conflicts of interest.
Microbial competition for phosphorus limits the CO2 response of a mature forest
Ecosystem models that incorporate P-cycle processes have generally predicted lower CO 2 fertilization effects on forest growth under P limitation , consistent with the findings of manipulative experiments with potted seedlings that low P availability attenuates plant responses to elevated CO 2 (eCO 2 ) . Plants may have some plasticity to become more efficient in using P to support growth, or more effective in acquiring P to allow extra C sequestration in their biomass under eCO 2 conditions . However, plants may converge towards more conservative P-use strategies (such as high nutrient-resorption efficiency) as P limitation increases over time , . Thus, for natural forests subject to long-term soil development and succession, a key question is the degree to which plant plasticity may accommodate future eCO 2 -induced increases in plant nutrient demand . Adequately addressing this question requires direct field-based evidence of ecosystem cycling and vegetation uptake of P by such forest systems under elevated CO 2 . The limited available evidence suggests that mature trees in non-aggrading (that is, steady-state or degrading) forests may not grow faster under eCO 2 (refs. – ), with P limitation providing a possible explanation , . Data from the Eucalyptus Free Air CO 2 Enrichment (EucFACE) experiment, an evergreen mature forest growing on low-P soils (Extended Data Fig. ), showed increased photosynthesis but no additional tree growth in the first 4 years of eCO 2 exposure , . Concurrently, it was found that eCO 2 did not significantly alter canopy leaf and stem P resorption or C:P stoichiometry , whereas eCO 2 increased P concentrations in the fine roots . The additional C uptake through photosynthesis in turn led to a possible enhanced belowground C allocation through exudates . 
A possible interpretation of the elevated root exudate activity is that it is part of the plant’s strategy to stimulate soil microbial activity , and, indeed, it was associated with an ephemeral increase in net mineralization of P . However, it was not clear whether this potential exchange of plant C for nutrients led to additional plant P uptake, which would potentially provide a route towards enhanced long-term C sequestration under eCO 2 . A crucial knowledge gap therefore emerged regarding how different ecosystem components interact to constrain the rate of P cycling, plant P uptake and growth response to eCO 2 . A comprehensive assessment of the ecosystem P cycle encompassing its key biological components and biogeochemical compartments can shed light on this question. Here we present an ecosystem-scale P budget for EucFACE based on data collected over the first 6 years of CO 2 enrichment (2013–2018; Fig. ). The EucFACE ecosystem may be considered to be broadly representative of P-limited forests globally in terms of plant-available soil P concentrations, leaf nutrient concentrations, and the sizes of P pools in plants and soils (Extended Data Fig. and Supplementary Information ). The results from this experiment may therefore provide important insights into the functioning of forests globally. Our P budget covers all major components of the ecosystem, including concentrations (Extended Data Fig. ), pools and fluxes connecting overstorey trees, understorey grasses, soil microorganisms, and soil organic and inorganic matter (Fig. ), as well as associated C:P ratios (Extended Data Fig. ). 
With the assembled P budget and the previous experimental evidence gathered from EucFACE , , , and elsewhere , we tested the following working hypotheses: (1) a large proportion of P would be sequestered in the slow-turnover woody and soil organic matter pools due to long-term ecosystem development and succession , whereas only a small fraction of P in the ecosystem would be recycled to meet the annual plant nutrient demand; and (2) the additional belowground C investment under eCO 2 (ref. ) would enhance soil P availability and therefore stimulate extra plant P uptake.
Our P budget provides direct field-based evidence to support hypothesis 1 that a large proportion of P was sequestered in the slow-turnover live woody and soil organic matter pools (soil P pool of 31.8 ± 5.7 g P per m 2 for the top 60 cm depth versus plant P pool of 1.60 ± 0.08 g P per m 2 ; mean ± s.d. of ambient plots; n = 3; Fig. ), whereas only a small fraction of P was recycled in the ecosystem to support annual plant nutrient demand (0.71 ± 0.01 g P per m 2 per year; Fig. ). In soils, most of the P was present in organic rather than inorganic pools (25.1 ± 4.8 and 6.7 ± 1.16 g P per m 2 , respectively; Fig. ). Soil microorganisms contained a sizable amount of P (5.97 ± 1.43 g P per m 2 ; Fig. ), representing 24% of the soil organic P pool, which is at the top end of such values from a global dataset (median, 7.2%; mean, 11.6%; Extended Data Fig. and Supplementary Information ). The sharp contrast between plant and microbial P pools (that is, >3.5× larger microbial P pool compared to the plant P pool) indicates a competitive imbalance for the labile soil inorganic P pool . In fact, only about 3% of soil P was readily extractable and therefore directly available for plant uptake (1.15 ± 0.28 g P per m 2 ; Fig. ); this small fraction of bioavailable P was independently supported by the Hedley fractionation estimate for this site (around 2%) (Fig. ). In plant and litter pools, the slow-turnover woody components contained 53% of the total P pool (that is, 0.36 ± 0.09, 0.30 ± 0.01, 0.15 ± 0.03 and 0.04 ± 0.04 g P per m 2 in sapwood, heartwood, coarse root and standing dead wood pools, respectively; Fig. ). An additional 4% was present on the forest floor as litter (that is, 0.06 ± 0.005 g P per m 2 ; Fig. ). The remaining 43% of the total plant and litter P was present in the fast-turnover pools, approximately equally split into canopy tree leaves, understorey shoots and fine roots (that is, 0.23 ± 0.02, 0.23 ± 0.04 and 0.24 ± 0.03 g P per m 2 , respectively; Fig. 
). The P cycling in this forest was mainly driven by the annual turnover of the plant pools (Fig. ), with overstorey leaf production and understorey aboveground biomass production dominating the total plant P demand (both around 40%; Fig. ). A sizable proportion of the canopy P (14%) was consumed and deposited as frass by leaf-chewing insect herbivores, estimated at 0.04 ± 0.009 g P per m 2 per year. Total plant P resorption had an important role in meeting the annual plant nutrient demand (45%; 0.32 ± 0.03 g P per m 2 per year; Fig. ), with overstorey trees being more efficient at resorbing P than understorey grasses (Supplementary Information ). The resorption fraction for canopy leaves (55%) was slightly above the global average (48%) reported for evergreen broadleaf forests , suggesting an efficient use of P by trees at EucFACE. The remaining P demand was met by plant P uptake, estimated to be 0.39 ± 0.03 g m −2 yr −1 (Fig. ). This flux was considerably lower than the net P mineralization flux estimated for the top 60 cm of the soil column (0.67 ± 0.14 g m −2 yr −1 ; Fig. ), suggesting that the soil P supply was sufficient to meet the annual plant P demand. Nevertheless, given that 92% of the fine-root biomass, and similar fractions of microbial biomass and microbial P content, were found in the top 30 cm of the soil , it is probable that plant P uptake occurred predominantly in the shallower soil layers. Fluxes for soil P leaching and atmospheric P deposition were negligible at the ecosystem scale (Fig. ), suggesting an essentially closed P cycle in this forest, which also means that the internal recycling of P is essential to support plant growth and metabolism in the EucFACE ecosystem.
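The budget fluxes quoted above are internally consistent: plant P uptake closes the gap between annual plant P demand and resorption, and the microbial-to-plant pool ratio underlies the ">3.5×" contrast noted earlier. A quick arithmetic check, with values (g P per m 2 per year for fluxes, g P per m 2 for pools) taken from the text:

```python
# annual plant P fluxes (g P per m2 per year)
demand = 0.71        # total plant P demand
resorption = 0.32    # total plant P resorption
uptake = demand - resorption          # flux met by soil uptake

resorption_fraction = resorption / demand

# pool contrast (g P per m2): microbial P vs total plant P
microbial_vs_plant = 5.97 / 1.60      # > 3.5x, as noted in the text

print(f"uptake = {uptake:.2f} g P per m2 per year")      # → 0.39
print(f"resorbed fraction = {resorption_fraction:.0%}")  # → 45%
```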
Averaged among the experimental treatment plots (that is, FACE rings), most of the P-related variables did not exhibit significant eCO 2 responses at the 95% confidence level, and the effect sizes were generally modest (Fig. , Extended Data Figs. and and Supplementary Information ); this result does not support hypothesis 2 that additional belowground C investment would increase soil P availability and plant P uptake under eCO 2 . The evidence for differences in the budget numbers between the control and eCO 2 treatments was statistically weak, reflecting a low sample size relative to the inherent variability in the field, a common drawback of FACE experiments. Nonetheless, this comprehensive P budget, taken as a whole, provides a cohesive and systematic framework to examine the relative responses of different P-cycle components to altered CO 2 concentration. Here we used this budget to interpret the eCO 2 responses (Fig. and Extended Data Figs. and ). Our results show very weak evidence that the mean plant P demand to support annual production of plant biomass (overstorey and understorey combined) was higher under eCO 2 (+6% or +0.043 ± 0.055 g P per m 2 per year, mean ± s.e.m. of the treatment difference; Fig. ). This effect may reflect the increased biomass production in the understorey and the increased P concentration in the fine roots with eCO 2 (ref. ) (Extended Data Fig. ), and is unlikely to be met by the plant P resorption response to eCO 2 (+1% or +0.003 ± 0.06 g P per m 2 per year; Fig. and Extended Data Fig. ). Changes in understorey species composition may have played a role in the observed changes of fine-root P concentration with eCO 2 (ref. ). Plant P uptake also showed weak evidence of a modest positive eCO 2 response (+8% or +0.033 ± 0.036 g P per m 2 per year; Fig. ).
Comparing the plant P uptake and plant P demand responses to eCO 2 suggests that additional plant P uptake would have a dominant role in meeting the extra demand if there were a detectable increase in plant P demand with eCO 2 . Furthermore, there was strong evidence that the mean residence time (MRT) of P in plants was lower in eCO 2 plots (−11% or −0.3 ± 0.12 years; Fig. ). This significant difference suggests faster plant P cycling in eCO 2 plots; thus, the modest increase in plant P uptake with eCO 2 is possibly biologically important relative to the size of the plant P pool. Similarly, plants, and particularly overstorey trees, increased their P-use efficiency in leaves to support C uptake with eCO 2 (moderate evidence; +10% or +531 ± 225 g C per g P; Fig. ). However, this did not lead to a more efficient use of P to support overall plant growth (+2% or +26 ± 143 g C per g P; Fig. and Extended Data Fig. ). This result suggests that plant growth responses to eCO 2 are probably proportional to the corresponding plant P uptake response, meaning that extra growth with eCO 2 would only be possible through additional plant P uptake. Nevertheless, there was little to no evidence for eCO 2 -induced responses of plant P uptake, net P mineralization (+0.013 ± 0.143 g P per m 2 per year; Fig. ), soil labile P concentration (Extended Data Fig. ) or soil phosphatase enzyme activity , despite the increased belowground C allocation . The large microbial P pool (Fig. ) and the sharp contrast between the amount of P stored in microorganisms and that actively recycled in the ecosystem to support annual plant production (Fig. ) suggest that microbial competition for P is strong. The annual incremental change in the microbial P pool did not exhibit any detectable eCO 2 response (−0.067 ± 0.71 g P per m 2 per year; Extended Data Fig. ), but any change in this quantity in response to eCO 2 would be small in absolute terms relative to the large total microbial P pool.
Taken together, we infer that microbial competition for P may constrain the rate of soil P supply to plants through pre-emptive exploitation of the mineralized P, limiting the amount of soluble P remaining for plants and therefore precluding a plant growth response to eCO 2 .
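The MRT and P-use-efficiency metrics above are simple ratios (their exact definitions are given in the Methods). A short sketch using the rounded ambient-plot pool and flux values quoted earlier; the GPP and leaf P production numbers below are hypothetical placeholders, not site data:

```python
# Plant P mean residence time: standing plant P pool excluding heartwood and
# coarse roots, divided by annual plant P uptake. Pool and flux values are
# the rounded ambient-plot numbers from the text.
total_plant_p = 1.60    # g P per m2
heartwood_p = 0.30      # g P per m2
coarse_root_p = 0.15    # g P per m2
plant_p_uptake = 0.39   # g P per m2 per year

mrt_years = (total_plant_p - heartwood_p - coarse_root_p) / plant_p_uptake

# Leaf-level P-use efficiency: gross primary production per unit of P
# invested in new leaves. Both numbers below are hypothetical.
gpp = 1400.0              # g C per m2 per year (hypothetical)
leaf_p_production = 0.26  # g P per m2 per year (hypothetical)
leaf_pue = gpp / leaf_p_production   # g C fixed per g P
```

An 11% drop in MRT with an unchanged pool, as reported above, implies a proportionally faster uptake flux.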
By constructing a comprehensive ecosystem P budget, we provide direct field-based evidence of how P, as a limiting macronutrient, is distributed through the plant–microorganism–soil continuum in a P-poor mature forest ecosystem, and how P availability constrains ecosystem productivity and its response to eCO 2 . In particular, soil microorganisms had amassed a large proportion of the soil P and displayed limited flexibility in responding to an eCO 2 -induced increase in belowground C investment from plants, thereby limiting the rate of plant-available soil P supply in response to eCO 2 . Notably, although we have relatively high statistical confidence in this interpretation, our results are subject to uncertainties due to the inherent spatial and temporal variability in this field-based, long-term experiment. Nevertheless, with the effect sizes and confidence intervals reported, this first comprehensive ecosystem P budget provides mechanistic insights into how P availability might broadly constrain ecosystem responses to eCO 2 in low-P forest ecosystems. The large proportion of biomass P stored in microorganisms in this forest is not unique , , and potentially reflects the advanced stage of ecosystem development , . In this respect, the mature, non-aggrading status of EucFACE differs from that of other forest FACE experiments . The lack of an apparent CO 2 effect on the soil microbial biomass and P pool, despite the additional belowground C investment by plants, suggests that microorganisms are possibly conservative in releasing P in exchange for C in the low-P soils at EucFACE . However, given that microbial C-use efficiency typically declines with lower soil P availability , it is also possible that the eCO 2 -induced increase in belowground C allocation into the low-P soils at EucFACE was not enough to stimulate extra P mineralization, even after 6 years of CO 2 enrichment.
The lack of response to eCO 2 in the relative abundance of saprotrophic and mycorrhizal fungi in soil over the first 5 years supports this interpretation (Supplementary Information ). It remains to be seen whether the eCO 2 -induced increase in plant belowground C allocation leads to a more detectable response of P availability to eCO 2 over longer time frames. The observed reduction in soil pH at depth is consistent with enhanced plant exudation and provides an indication that this may occur; it reflects an additional pathway through which soil P can be made available to plants under eCO 2 (ref. ). Extra plant nutrient uptake is also possible if plants invest in deeper or more extensive rooting systems under eCO 2 , enabling them to explore deeper layers of the soil, as suggested in other FACE studies , . Nevertheless, given that the likely increase in plant P demand with eCO 2 was largely a reflection of the enhanced understorey biomass turnover , understorey vegetation could be more competitive than overstorey trees in acquiring any newly available P with eCO 2 . Thus, long-term enhancement of tree growth and ecosystem C storage under eCO 2 remains questionable in this low-P forest system.
The response of P-limited forest ecosystems to eCO 2 is a major source of uncertainty in global land surface models , , , but is essential knowledge to inform climate change mitigation strategies . Current models generally predict that soil P availability imposes a critical constraint on the C-sequestration potential of forests globally , . However, models differ widely in their predicted CO 2 responses, in part because they adopt competing, plausible representations of P-cycle processes, particularly regarding plant strategies for P use and acquisition . Our complete assessment of the ecosystem P budget provides a rare opportunity to benchmark both the prediction accuracy and the validity of the mechanisms assumed in model simulations, especially for those concerning mature forests grown on low-P soils. Our results disagree with the predictions of two P-enabled models made before the start of EucFACE, which suggested that soil P processes have no material effect on (that is, do not constrain) the plant growth response to eCO 2 (ref. ). In fact, the strong microbial constraint observed at EucFACE highlights the need to more accurately represent the C cost of nutrient acquisition, as well as the biological and biochemical processes that regulate soil P cycling responses to eCO 2 (refs. , ). These processes are typically not well represented in land surface models , . For example, a recent multimodel intercomparison for a P-limited tropical rainforest showed that models with assumptions that upregulate plant P acquisition can effectively alleviate plant P limitation under eCO 2 . However, they do so through an increased desorption of the less labile soil inorganic P pool, which, in the models, does not incur any C cost: an unrealistic assumption that does not involve any identified biological processes .
Including a trade-off between plant C investment and nutrient acquisition in models has resulted in much lower global estimates of net primary production . However, further data are still needed to quantitatively characterize this trade-off and the processes that regulate its effectiveness under eCO 2 (refs. , ). In comparison, models that allow upregulation of plant P-use efficiency, such as through flexible plant tissue C:P stoichiometry, commonly predict an initial positive biomass response to eCO 2 . However, flexible stoichiometry also reduces litter quality for decomposition, thereby making nutrients increasingly unavailable to plants over time. It is therefore highly unlikely that these models will correctly simulate the faster plant P cycling observed with eCO 2 at EucFACE. Thus, models need to impose more realistic plasticity and biological limits on plant P-use efficiency . Currently, such improvements in models are limited by the availability of species-specific data on the relevant traits and their functional responses to eCO 2 variation , . Taken together, our results suggest that a solid understanding of C-nutrient feedbacks between plants, soils and microorganisms is critical to improve our ability to predict the land C sink under climate change. Although plants, and overstorey trees in particular, were highly efficient at using P in the EucFACE mature forest ecosystem, they were not able to capture more P after 6 years of eCO 2 exposure, despite enhanced belowground C investment. The competitive superiority of the soil microbial community relative to vegetation with respect to P uptake provides one probable explanation for the lack of a tree growth response to eCO 2 .
Our findings for this P-limited mature forest ecosystem in Australia are probably relevant to understanding the long-term capacity of forests of the tropics and subtropics to capitalize on the production-enhancement potential of rising atmospheric CO 2 , and therefore to help maintain the persistence of the global land C sink under climate change.
Site description
The EucFACE experiment is located in a remnant native Cumberland Plain woodland on an ancient alluvial floodplain in western Sydney, Australia (33° 37′ S, 150° 44′ E, 30 m in elevation). The site has been unmanaged for over 90 years and is characterized by a humid temperate-subtropical transitional climate with a mean annual temperature of 17 °C and mean annual precipitation of about 800 mm (1881–2014, Bureau of Meteorology, station 067105 in Richmond, New South Wales, Australia; http://www.bom.gov.au ). The soil is formed from weakly organized alluvial deposits and is primarily an Aeric Podosol with areas of Densic Podosol (Australian soil classification) . The open woodland (600–1,000 trees per ha) is dominated by Eucalyptus tereticornis Sm. in the overstorey, while the understorey is dominated by the C 3 grass Microlaena stipoides (Labill.) R.Br , , and is co-dominated by ectomycorrhizal and arbuscular mycorrhizal fungi species in soils , . Evidence from a Eucalyptus woodland in Southwest Australia indicates that M. stipoides can release phytosiderophores (that is, organic exudates with strong chelating affinity) under low-P conditions to mobilize soil P . The vegetation within three randomly selected plots (~450 m 2 each) has been exposed to an eCO 2 treatment aiming for a CO 2 mole fraction of 150 μmol mol −1 above the ambient concentration since February 2013 (ref. ). The other three plots were used as control plots representing the aCO 2 treatment, with infrastructure and instrumentation identical to the treatment plots. An earlier study has estimated the ecosystem C budget for the site under both ambient and elevated CO 2 treatment ; here we report some relevant numbers in Extended Data Table . Total soil N for the top 10 cm of the soil is 151 ± 32 g N per m 2 , and available soil P is 0.24 ± 0.04 g P per m 2 , broadly comparable to soils in tropical and subtropical forests globally , (Extended Data Fig. ).
The N:P ratio of mature canopy leaves is 23.1 ± 0.4 (ref. ), above the stoichiometric threshold of 20:1 that suggests likely P limitation (Extended Data Fig. ). The plant P pool and the plant P to soil P ratio at EucFACE are also comparable to those seen in other temperate or tropical forests (Extended Data Fig. ). It has been shown that P fertilization in the same forest increases tree biomass, suggesting soil P availability is a limiting factor for plant productivity at the site .
Estimates of P pools and fluxes
We estimated plot-specific P pools and fluxes at EucFACE based on data collected over 2013–2018 (ref. ). We defined a pool as a P reservoir and its annual increment as the annual change in the size of that reservoir. We reported all P pools in the unit of g P per m 2 and all P fluxes in the unit of g P per m 2 per year. For data that have subreplicates within each treatment plot, we first calculated the plot means and the associated uncertainties (for example, standard errors), and then used these statistics to calculate the treatment means and their uncertainties. For data that have repeated measurements over time, our principle is to first derive an annual number and then calculate the multiyear means and their associated uncertainties. Pools were calculated by averaging all repeated measurements within a year. For fluxes with repeated measurements within a year, we calculated the annual totals considering the duration over which the flux was measured. Below, we report how individual P pools and fluxes were estimated in detail.
Plant P pools
The total standing plant P pool was estimated as the sum of all vegetation P pools, namely: canopy, stem, fine-root, coarse-root, understorey aboveground, standing dead wood and forest floor leaf litter P pools. We generally adopted a concentration-by-biomass approach to estimate the plot-specific plant P pools unless otherwise stated in the methods below.
Fully expanded green mature leaves from the overstorey trees were collected from 3–4 dominant or co-dominant trees per plot in February, May and October between 2013 and 2018, whereas senesced leaves were collected from 2–3 litter traps (~0.2 m 2 ) per plot in each February between 2013 and 2018 (ref. ). Green understorey leaves were collected in 2013, 2015 and 2017, and senesced understorey leaves were collected in June 2017. Total P concentrations of green and senesced leaves were determined using a standard Kjeldahl digestion procedure, using pure sulfuric acid and hydrogen peroxide (H 2 O 2 , 30%). The total P concentrations of the Kjeldahl digests were colorimetrically analysed at 880 nm after a molybdate reaction in a discrete analyzer (AQ2 Discrete Analyzer, SEAL Analytical, EPA135 method). Overstorey leaf P and understorey aboveground P pools were estimated based on the respective plot-level mean P concentration of the green leaves and the corresponding biomass data . The forest-floor leaf litter P pool was estimated on the basis of the forest-floor leaf litter pool and the senesced overstorey leaf P concentration. Woody materials (that is, bark, sapwood and heartwood) were sampled in November 2015 from breast height in three dominant trees per FACE plot. Sapwood was defined as the outer 20 mm of wood beneath the bark , . All woody materials were digested using the Kjeldahl procedure and analysed for total P concentration by inductively coupled plasma optical emission spectroscopy (Perkin-Elmer). For all chemical analyses, we ran blind internal standards, using NIST Standard Reference Material 1515 (U.S. National Institute of Standards and Technology) for quality-control purposes. Sapwood and heartwood P pools were calculated using the respective P concentrations and biomass data at the plot level. The total wood P pool was estimated as the sum of the sapwood and heartwood P pools. 
Standing dead wood P pool was estimated on the basis of standing dead woody biomass data, which pooled all dead trees within each plot together. We assumed the same sapwood and heartwood partitioning and used the respective P concentrations to obtain the total standing dead wood P pool for each plot. Coarse-root P pool was calculated based on coarse-root biomass and sapwood P concentration, with coarse-root biomass estimated based on an allometric relationship developed for Australian forest species . The fine-root P concentration was determined on the basis of fine-root samples collected using eight intact soil cores over the top 30 cm of the soils within 4 randomly located, permanent 1 m × 1 m subplots in each FACE plot. Fine roots included roots of both overstorey and understorey vegetation, and roots were considered fine roots when their diameter was <2 mm and they showed no secondary growth. The samples were collected using a soil auger (5 cm diameter) in February 2014, June 2014, September 2014, December 2014, May 2015, September 2015 and February 2016. After sorting and oven-drying, small representative subsamples (~100 mg) from each standing crop core for each date were ground on the Wig-L-Bug dental grinder (Crescent Dental Manufacturing). Total P concentration in the sample was assessed using X-ray fluorescence spectrometry (Epsilon 3XLE, PANalytical) . We then used fine-root biomass data collected in December 2013 to extrapolate the depth profile in fine-root biomass down to the 30–60 cm soil horizon. We considered the intermediate root class (that is, roots with a diameter between 2 and 3 mm) to have the same P concentration as the fine roots, and we pooled the two root classes into the fine-root P pool. We estimated the fine-root P pool based on fine-root P concentration and the biomass data for each plot.
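Most of the pools above follow the same concentration-by-biomass arithmetic. A generic sketch (tissue names follow the text; all concentrations and biomass values are hypothetical example numbers):

```python
# Plant P pool per tissue = P concentration (mg P per g dry mass) times
# biomass (g dry mass per m2), converted to g P per m2. All numbers are
# hypothetical, for illustration only.
tissues = {
    # tissue: (P concentration in mg P per g, biomass in g per m2)
    "canopy leaves": (0.8, 290.0),
    "sapwood": (0.05, 7200.0),
    "fine roots": (0.6, 400.0),
}

pools = {name: conc * biomass / 1000.0   # mg P -> g P
         for name, (conc, biomass) in tissues.items()}
total_plant_p = sum(pools.values())
```

In practice this is done per plot first, with uncertainties propagated to the treatment means as described above.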
Vegetation P fluxes
Total plant P demand was estimated as the sum of all of the vegetation P fluxes to support the annual biomass growth, namely: canopy, stem, branch, bark, twig, reproduction, fine-root, coarse-root and understorey aboveground P production fluxes. Each plant P production flux was calculated by multiplying the respective P concentration measured in the live plant organ and the corresponding annual biomass production rate. Specifically, canopy leaf, branch, bark, twig and reproductive structure biomass production fluxes were estimated on the basis of the monthly litter data collected from circular fine-mesh traps (~0.2 m 2 ) at eight random locations for each FACE plot . We independently estimated a herbivory consumption flux of the canopy leaves and added this flux on top of the canopy leaf litter flux to obtain the total canopy leaf production flux , , . Considering an approximately annual canopy leaf lifespan , the estimated canopy leaf P production flux was slightly more than sufficient to replace the entire canopy P pool annually. The canopy P pool was a conservative estimate as it takes the mean of the time-varying canopy size, whereas the canopy leaf P production flux takes the cumulative leaf litterfall. The production fluxes of wood and coarse root were estimated based on the annual incremental change of wood and coarse-root biomass, respectively. The production flux of fine roots was estimated based on samples collected from in-growth cores at four locations per plot. The production flux of the understorey aboveground component was estimated on the basis of biomass clippings taken between 2014 and 2017, assuming one understorey turnover per harvest interval . The P concentrations in green canopy and understorey leaves were used to calculate canopy and understorey aboveground P production fluxes. The sapwood P concentration was used to calculate wood and coarse-root P production fluxes.
P concentrations in bark, twig, reproductive structure and branch were assumed to be the same as those in sapwood. Plant P litter fluxes of canopy and understorey leaves were calculated using the respective litter production flux and the P concentration in senesced plant tissue. Litter P fluxes of bark, branch, twig and reproductive structure were assumed to be the same as their production P fluxes. Frass was collected monthly for 2 years from all 8 litter traps per FACE plot between late 2012 and 2014 (ref. ). Frass was oven-dried at 40 °C for 72 h. A microscope was used to identify the frass of leaf-chewing herbivores by shape, texture and colour, excluding lerps and starchy excretions by plant-sucking psyllids . After sorting, frass samples were weighed, pooled by plot and ground into a fine powder for chemical analysis. Monthly P concentrations were determined by placing 50 mg of sample in a muffle furnace (550 °C) for 8 h. The resulting ash was dissolved in 5 ml of 1% perchloric acid and the total P was quantified using the ascorbic acid–molybdate reaction . Frass P litter flux was estimated on the basis of the frass P concentration and the corresponding litter flux measured from the litter traps. The plant P-resorption flux was estimated as the sum of canopy, understorey aboveground, sapwood, fine-root and coarse-root P resorption fluxes. Plant P-resorption rates for the canopy and understorey leaves were estimated on the basis of the corresponding difference between fully expanded live and senesced leaf P concentrations. The sapwood P-resorption flux was estimated as the difference in P concentrations between sapwood and heartwood, and we used the same fraction to estimate the coarse-root resorption flux. The fine-root P-resorption coefficient was assumed to be a constant of 50% due to the difficulty in separating live and dead components of the fine roots .
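The resorption calculation described above reduces to a concentration difference. A minimal sketch (the green/senesced concentrations are hypothetical, though they reproduce the ~55% canopy resorption fraction reported in the main text):

```python
def resorption_efficiency(green_conc, senesced_conc):
    """Fraction of tissue P withdrawn before senescence."""
    return (green_conc - senesced_conc) / green_conc

# Hypothetical green and senesced leaf P concentrations (mg P per g)
green, senesced = 0.8, 0.36
eff = resorption_efficiency(green, senesced)        # 0.55

# The resorption flux scales the tissue's P production flux
p_production_flux = 0.23     # g P per m2 per year (hypothetical)
p_resorption_flux = eff * p_production_flux
```

The sapwood-to-heartwood concentration difference plays the same role for the woody resorption fluxes.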
Total plant P uptake was estimated as the net difference between plant P-demand and plant P-resorption fluxes. Overstorey and understorey P-use efficiencies in supporting photosynthesis were calculated as the respective gross primary production divided by the corresponding leaf P-production flux. The plant P-use efficiency was estimated as the net primary production of both overstorey and understorey vegetation over the total plant P demand, because fine-root production includes contributions from both overstorey and understorey plants. The plant P MRT (years) was calculated as the standing vegetation P pool (excluding the heartwood and coarse root) over the plant P-uptake flux.
Soil P pools
Soil P pools were determined based on soil collected from four 2 m × 2 m subplots within each of the six FACE plots. A grid system was assigned to each soil subplot, and sampling locations were noted to ensure the same location was not sampled more than once. At the time of sampling, three soil cores (3 cm diameter) were collected from each sample location and pooled into one composite sample for each subplot. Pooled soils were sieved (<2 mm). Soils were repeatedly sampled over the top 10 cm between 2013 and 2015, once for the 10–30 cm depth in 2013 and once in 2017 for 0–10 cm, 10–30 cm and 30 cm to a hard clay layer located at variable depth across the site (median 56 cm, range 35–85 cm). P pools were calculated on the basis of the measured P concentrations and mean soil bulk density measures at each depth class for each FACE plot (Extended Data Table ). The pool size for 2017 up to 60 cm depth was calculated using the concentration measured below 30 cm and to the clay layer.
In soils sampled from 2013 to 2015, the total soil P concentration was determined on finely milled (MM 400, Retsch) oven-dried (40 °C, 48 h) soils after aqua regia digestion and inductively coupled plasma mass spectrometry (ICP-MS) analysis (Environmental Analysis Laboratory, Southern Cross University). For 2017 soils, total, organic and inorganic soil P were determined by two methods. Using an approach described previously , 1 g of oven-dried (40 °C, 48 h) finely ground (MM 400, Retsch) soil was either ignited for 1 h at 550 °C (for total P) or extracted untreated (for inorganic P) for 16 h with 25 ml of 0.5 M H 2 SO 4 and the extracts passed through a 0.2 µm filter before colorimetric analysis . Organic P was determined as the difference between total P and inorganic P. As the method has been shown to overestimate organic P in highly weathered soils , we also used a previously described approach whereby 2 g of milled soil was extracted for 16 h with 30 ml of a 0.25 M NaOH + 0.05 M EDTA solution. After passing the extract through a 0.2 µm filter, the filtrates were analysed for total P concentration (ICP-MS) and inorganic P using the Malachite Green method , and organic P was computed as the difference between total P and inorganic P. Values obtained for total P, inorganic P and organic P that were determined using both methods were similar, and values for the respective P classes were averaged across methods. Total P values determined in 2017 were also similar to those obtained previously using the aqua regia method. To determine operationally defined soil P pools, soils collected from the top 10 cm of the soil in 2013 were sequentially extracted with 1 M NH 4 Cl, 0.5 M NaHCO 3 (pH 8.5), 0.1 M NaOH, 1 M HCl and 0.1 M NaOH according to a modified Hedley fractionation method . Each extract was analysed colorimetrically for determination of inorganic P using the Malachite Green method .
To determine organic P, a subsample of extracts (2.5 ml) was digested with 0.55 ml 11 M H 2 SO 4 and 1.0 ml 50% ammonium peroxydisulfate as previously described , and inorganic P determined as before. Organic P was defined as the difference in inorganic P between digested and undigested samples. The occluded P was defined as the total P (as determined by aqua regia, described above) minus the sum of all other P concentrations . We used the Hedley fractionation method to discriminate soil P pools of different chemical extractability as a potential indicator of soil P bioavailability. Notably, this method may introduce artifacts in certain chemical fraction estimates . We therefore took a conservative approach by grouping less-available soil P fractions as a residual P pool, and reported the more easily extractable fractions separately, which we operationally defined as exchangeable inorganic P, exchangeable organic P and moderately labile organic P. The extractable inorganic P pool (that is, labile P i ) was determined quarterly between 2013 and 2015 on 0–10 cm layer soils using the Bray-1 P extraction method , and once in 2017 (0–10 cm, 10–30 cm and 30–60 cm) . Phosphate concentrations in soil extracts were determined colorimetrically using the molybdate blue assay (AQ2 Discrete Analyzer, SEAL Analytical) using an established method for available P (EPA-118-A rev.5). The proportion of change in concentration across depth in 2017 was applied to the averaged 2013–2015 measurements to estimate the concentrations across 10–30 cm and 30–60 cm depths. The microbial P pool, comprising bacteria, archaea, protozoa and fungi, was assessed within 2 days of sampling using chloroform fumigation extraction , and estimated quarterly between 2014 and 2015 for 0–10 cm and once in 2017 (0–10 cm, 10–30 cm and 30–60 cm). In brief, 3.75 g of soil was fumigated in the dark for 24 h.
Phosphorus was extracted from fumigated and unfumigated samples using the Bray-1 P extraction method as above. Microbial biomass P was determined as the difference in extractable P between fumigated and unfumigated samples. A conversion factor of 0.4 was used to calculate the microbial P pool . The proportion of change in microbial P concentration across depth measured in 2017 was applied to the averaged 2014–2015 measurements per plot (0–10 cm) to estimate the concentrations across 10–30 cm and 30–60 cm depths.
Soil P fluxes
The soil net P-mineralization flux (gross mineralization minus gross immobilization) was determined in situ at the 0–10 cm depth on a quarterly basis as the change in phosphate concentration between two timepoints between January 2013 and January 2016 using PVC pipes . The soil net P-mineralization flux estimated using this method is subject to uncertainty because it does not include contributions from plant roots that could potentially affect the C input and P exchange in the PVC pipes. However, the net soil P mineralization flux was corroborated by estimates from other measurements that integrate all plant and microbial processes, namely microbial P, phosphatase enzyme activity, available P concentrations and soil P concentrations measured using the Hedley fractionation method. To estimate net P-mineralization fluxes in deeper soil layers (10–30 cm, 30–60 cm), we assumed that the net mineralization activity was proportional to organic matter content, microbial biomass and fine-root biomass, and applied the proportion of change of measured soil C, microbial C and fine-root C across depth for each plot to the 0–10 cm measured net P-mineralization flux. The values obtained with the three variables were very similar, differing by 4.5%; we therefore report values estimated using soil C only.
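Two of the calculations above are easily made explicit: the fumigation-based microbial P estimate (fumigated minus unfumigated extractable P, divided by the 0.4 conversion factor) and the soil-C-proportional extrapolation of the 0-10 cm net P-mineralization flux to deeper layers. A sketch with invented extract and soil C values:

```python
K_P = 0.4  # microbial P conversion factor from the text

def microbial_p(bray_fumigated, bray_unfumigated, k=K_P):
    """Microbial biomass P from fumigated minus unfumigated extractable P."""
    return (bray_fumigated - bray_unfumigated) / k

mbp = microbial_p(3.1, 0.9)   # hypothetical extract values

# Depth extrapolation: deeper-layer net P-mineralization fluxes assumed
# proportional to soil C at each depth. All numbers are hypothetical.
flux_0_10 = 0.45  # g P per m2 per year, measured in situ at 0-10 cm
soil_c = {"0-10": 1200.0, "10-30": 480.0, "30-60": 240.0}  # g C per m2

flux_by_layer = {layer: flux_0_10 * c / soil_c["0-10"]
                 for layer, c in soil_c.items()}
total_net_p_mineralization = sum(flux_by_layer.values())
```

In the study the same scaling was checked against microbial C and fine-root C profiles, with closely agreeing results.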
The soil P-leaching flux was estimated based on phosphate concentrations collected in deeper soils (35–75 cm) using a water suction lysimeter, assuming a water efflux of 20 ml m−2 d−1 through drainage at the site. The atmospheric P-deposition flux at the site was extracted from a gridded dataset.

Statistical analyses

We calculated treatment averages and their s.d. based on the plot-level data (n = 3). We calculated the s.d. for the aggregated pools and fluxes (for example, the total plant P pool) by summing the individual components that constitute the aggregated pool or flux for each plot and computing the s.d. within each treatment (n = 3). The CO2 treatment effect was calculated as the net difference between eCO2 and aCO2 plots, with its s.d. (SDeff) calculated by pooling the s.d. values of the aCO2 and eCO2 treatments (SDamb and SDele, respectively) as follows:

$$\mathrm{SD}_{\mathrm{eff}}=\sqrt{\frac{\mathrm{SD}_{\mathrm{amb}}^{2}+\mathrm{SD}_{\mathrm{ele}}^{2}}{2}}$$

Owing to long-term environmental fluctuation and spatial heterogeneity across treatment plots, and the limited number of replicates in large-scale field-based experiments, the classic dichotomous approach of statistical testing based on the P value alone may underestimate the more subtle responses in manipulative experiments such as EucFACE. We therefore used multiple analytical approaches to robustly quantify and interpret the CO2 responses: confidence intervals to indicate the effect size (Fig. and Extended Data Figs. and ), linear mixed-effect models to report statistical results (Supplementary Information) and bootstrap resampling as a sensitivity test (Extended Data Figs. and , Extended Data Table and Supplementary Information). Reporting the means and confidence intervals is a useful way of assessing uncertainties in data, and has been shown to be more effective for assessing relationships within data than the use of P values alone, regardless of statistical significance. We calculated the confidence interval for the CO2 effect size (CIeff) as:

$$\mathrm{CI}_{\mathrm{eff}}={t}_{95}\,\mathrm{SD}_{\mathrm{eff}}\sqrt{\frac{1}{{n}_{1}}+\frac{1}{{n}_{2}}}$$

where t95 is the critical value of the t-distribution at 95% with (n1 + n2 − 2) d.f., and n1 = n2 = 3 is the sample size for each CO2 treatment. Taking the same approach, we also calculated the confidence intervals at 85% and 75% to demonstrate the decreasing level of confidence in the reported CO2 effect size. For the mean CO2 effect size to be statistically distinguishable from the null hypothesis at the 95%, 85% and 75% confidence levels, the corresponding confidence intervals must not overlap zero. To investigate the main CO2 effect statistically, and how temporal fluctuation may have affected the CO2 effect (or the lack thereof), we built a linear mixed-effect model with CO2 treatment, year and their interaction as fixed factors and treatment plot as a random factor. We followed the conventional approach to interpreting these results (that is, a P-value cut-off of <0.05 as an indication of statistical significance between the ambient and elevated CO2 treatment plots). The results of the linear mixed-effect models indicate a generally consistent main CO2 effect across time (Supplementary Information). We therefore report only the main CO2 effect, based on the time-averaged plot-level data, in the main text, and took an evidence-based approach to interpreting the statistical significance of these results. Moreover, to quantify the uncertainties associated with temporal fluctuations in the measurements, we developed a bootstrapping method that randomly resamples datapoints from each CO2 treatment 1,000 times without ignoring the temporal fluctuation in the measurements. This approach can be considered a sensitivity test. We then estimated the 95%, 85% and 75% confidence intervals of the bootstrapped CO2 effect based on the resampled data.
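As a compact illustration, the pooled effect-size s.d., the t-based confidence interval and the percentile-bootstrap sensitivity test described above can be sketched as follows. All numeric inputs are invented for illustration; the default critical value 2.776 is the two-sided 95% t quantile for n1 + n2 − 2 = 4 d.f.

```python
import math
import random
import statistics

def pooled_sd(sd_amb, sd_ele):
    """SD_eff = sqrt((SD_amb^2 + SD_ele^2) / 2)."""
    return math.sqrt((sd_amb ** 2 + sd_ele ** 2) / 2)

def ci_half_width(sd_eff, n1=3, n2=3, t_crit=2.776):
    """CI_eff = t_95 * SD_eff * sqrt(1/n1 + 1/n2); t_crit defaults to
    the two-sided 95% t quantile with n1 + n2 - 2 = 4 d.f."""
    return t_crit * sd_eff * math.sqrt(1 / n1 + 1 / n2)

def bootstrap_effect_ci(amb, ele, n_boot=1000, alpha=0.05, seed=1):
    """Percentile-bootstrap CI for the eCO2 - aCO2 effect; resampling
    observations with replacement retains their temporal fluctuation."""
    rng = random.Random(seed)
    effects = sorted(
        statistics.mean(rng.choices(ele, k=len(ele)))
        - statistics.mean(rng.choices(amb, k=len(amb)))
        for _ in range(n_boot)
    )
    return (effects[int(n_boot * alpha / 2)],
            effects[int(n_boot * (1 - alpha / 2)) - 1])

# Invented plot-level P pools (g P per m2), n = 3 plots per treatment
amb, ele = [1.10, 1.25, 0.98], [1.30, 1.45, 1.22]
effect = statistics.mean(ele) - statistics.mean(amb)
half = ci_half_width(pooled_sd(statistics.stdev(amb), statistics.stdev(ele)))
# The effect is distinguishable from zero at 95% only if abs(effect) > half
```

The same `ci_half_width` call with t_crit set to the 85% or 75% quantile gives the narrower intervals used to express decreasing confidence.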
Results of this analysis suggest that the uncertainties associated with temporal fluctuations in the data do not affect the findings described in the main text (Extended Data Figs. – and Supplementary Information).

Reporting summary

Further information on research design is available in the reporting summary linked to this article.
The EucFACE experiment is located in a remnant native Cumberland Plain woodland on an ancient alluvial floodplain in western Sydney, Australia (33° 37′ S, 150° 44′ E, 30 m in elevation). The site has been unmanaged for over 90 years and is characterized by a humid temperate-subtropical transitional climate with a mean annual temperature of 17 °C and mean annual precipitation of about 800 mm (1881–2014, Bureau of Meteorology, station 067105 in Richmond, New South Wales, Australia; http://www.bom.gov.au ). The soil is formed from weakly organized alluvial deposits and is primarily an Aeric Podosol with areas of Densic Podosol (Australian soil classification). The open woodland (600–1,000 trees per ha) is dominated by Eucalyptus tereticornis Sm. in the overstorey, while the understorey is dominated by the C3 grass Microlaena stipoides (Labill.) R.Br, and soils are co-dominated by ectomycorrhizal and arbuscular mycorrhizal fungal species. Evidence from a Eucalyptus woodland in Southwest Australia indicates that M. stipoides can release phytosiderophores (that is, organic exudates with strong chelating affinity) under low-P conditions to mobilize soil P. The vegetation within three randomly selected plots (~450 m2 each) has been exposed to an eCO2 treatment aiming for a CO2 mole fraction of 150 μmol mol−1 above the ambient concentration since February 2013 (ref. ). The other three plots were used as control plots representing the aCO2 treatment, with infrastructure and instrumentation identical to those of the treatment plots. An earlier study has estimated the ecosystem C budget for the site under both ambient and elevated CO2 treatment; here we report some relevant numbers in Extended Data Table . Total soil N for the top 10 cm of the soil is 151 ± 32 g N per m2, and available soil P is 0.24 ± 0.04 g P per m2, broadly comparable to soils in tropical and subtropical forests globally (Extended Data Fig. ).
The N:P ratio of mature canopy leaves is 23.1 ± 0.4 (ref. ), above the stoichiometric ratio of 20:1 that suggests likely P limitation (Extended Data Fig. ). The plant P pool and the plant-P-to-soil-P ratio at EucFACE are also comparable to those seen in other temperate and tropical forests (Extended Data Fig. ). It has been shown that P fertilization in the same forest increases tree biomass, suggesting that soil P availability is a limiting factor for plant productivity at the site.
We estimated plot-specific P pools and fluxes at EucFACE based on data collected over 2013–2018 (ref. ). We defined a pool as a P reservoir and its annual increment as the annual change in the size of this reservoir. We reported all P pools in units of g P per m2 and all P fluxes in units of g P per m2 per year. For data that have subreplicates within each treatment plot, we first calculated the plot means and the associated uncertainties (for example, standard errors), and then used these statistics to calculate the treatment means and their uncertainties. For data that have repeated measurements over time, our principle was to first derive an annual number and then calculate the multiyear means and their associated uncertainties. Pools were calculated by averaging all repeated measurements within a year. For fluxes with repeated measurements within a year, we calculated the annual totals considering the duration over which the flux was measured. Below, we report in detail how the individual P pools and fluxes were estimated.

Plant P pools

The total standing plant P pool was estimated as the sum of all vegetation P pools, namely: canopy, stem, fine-root, coarse-root, understorey aboveground, standing dead wood and forest-floor leaf litter P pools. We generally adopted a concentration-by-biomass approach to estimate the plot-specific plant P pools unless otherwise stated in the methods below. Fully expanded green mature leaves from the overstorey trees were collected from 3–4 dominant or co-dominant trees per plot in February, May and October between 2013 and 2018, whereas senesced leaves were collected from 2–3 litter traps (~0.2 m2) per plot in each February between 2013 and 2018 (ref. ). Green understorey leaves were collected in 2013, 2015 and 2017, and senesced understorey leaves were collected in June 2017.
Total P concentrations of green and senesced leaves were determined using a standard Kjeldahl digestion procedure, using pure sulfuric acid and hydrogen peroxide (H 2 O 2 , 30%). The total P concentrations of the Kjeldahl digests were colorimetrically analysed at 880 nm after a molybdate reaction in a discrete analyzer (AQ2 Discrete Analyzer, SEAL Analytical, EPA135 method). Overstorey leaf P and understorey aboveground P pools were estimated based on the respective plot-level mean P concentration of the green leaves and the corresponding biomass data . The forest-floor leaf litter P pool was estimated on the basis of the forest-floor leaf litter pool and the senesced overstorey leaf P concentration. Woody materials (that is, bark, sapwood and heartwood) were sampled in November 2015 from breast height in three dominant trees per FACE plot. Sapwood was defined as the outer 20 mm of wood beneath the bark , . All woody materials were digested using the Kjeldahl procedure and analysed for total P concentration by inductively coupled plasma optical emission spectroscopy (Perkin-Elmer). For all chemical analyses, we ran blind internal standards, using NIST Standard Reference Material 1515 (U.S. National Institute of Standards and Technology) for quality-control purposes. Sapwood and heartwood P pools were calculated using the respective P concentrations and biomass data at the plot level. The total wood P pool was estimated as the sum of the sapwood and heartwood P pools. Standing dead wood P pool was estimated on the basis of standing dead woody biomass data, which pooled all dead trees within each plot together. We assumed the same sapwood and heartwood partitioning and used the respective P concentrations to obtain the total standing dead wood P pool for each plot. Coarse-root P pool was calculated based on coarse-root biomass and sapwood P concentration, with coarse-root biomass estimated based on an allometric relationship developed for Australian forest species . 
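The concentration-by-biomass pool bookkeeping described in this subsection, followed by the plot-then-treatment aggregation described earlier, can be sketched as follows. All tissue P concentrations and biomass values below are invented for illustration.

```python
import statistics

def p_pool(conc_mg_per_g, biomass_g_per_m2):
    """Tissue P pool (g P per m2) from a P concentration (mg P per g dry
    mass) and a biomass (g dry mass per m2)."""
    return conc_mg_per_g * biomass_g_per_m2 / 1000.0

# Invented (concentration, biomass) pairs for three plots of one treatment
plots = [
    {"sap": (0.08, 5000.0), "heart": (0.02, 9000.0)},
    {"sap": (0.07, 5500.0), "heart": (0.02, 8500.0)},
    {"sap": (0.09, 4800.0), "heart": (0.03, 9200.0)},
]
# Plot-level total wood P pool = sapwood P pool + heartwood P pool
wood_pools = [p_pool(*p["sap"]) + p_pool(*p["heart"]) for p in plots]
mean_wood_p = statistics.mean(wood_pools)  # treatment mean (n = 3)
sd_wood_p = statistics.stdev(wood_pools)   # treatment s.d. across plots
```

Aggregated pools (for example, total plant P) are summed per plot first, so the treatment s.d. reflects between-plot variation in the aggregate rather than summed component errors.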
The fine-root P concentration was determined on the basis of fine-root samples collected using eight intact soil cores over the top 30 cm of the soils within 4 randomly located, permanent 1 m × 1 m subplots in each FACE plot. Fine roots included roots of both overstorey and understorey vegetation, and roots were considered fine when their diameter was <2 mm and they showed no secondary growth. The samples were collected using a soil auger (5 cm diameter) in February 2014, June 2014, September 2014, December 2014, May 2015, September 2015 and February 2016. After sorting and oven-drying, small representative subsamples (~100 mg) from each standing-crop core for each date were ground on a Wig-L-Bug dental grinder (Crescent Dental Manufacturing). Total P concentration in the sample was assessed using X-ray fluorescence spectrometry (Epsilon 3XLE, PANalytical). We then used fine-root biomass data collected in December 2013 to extrapolate the depth profile of fine-root biomass down to the 30–60 cm soil horizon. We considered the intermediate root class (that is, roots with a diameter of 2–3 mm) to have the same P concentration as the fine roots, and we pooled the two root classes into the fine-root P pool. We estimated the fine-root P pool based on the fine-root P concentration and the biomass data for each plot.

Vegetation P fluxes

Total plant P demand was estimated as the sum of all of the vegetation P fluxes that support the annual biomass growth, namely: canopy, stem, branch, bark, twig, reproduction, fine-root, coarse-root and understorey aboveground P production fluxes. Each plant P production flux was calculated by multiplying the respective P concentration measured in the live plant organ by the corresponding annual biomass production rate.
Specifically, canopy leaf, branch, bark, twig and reproductive structure biomass production fluxes were estimated on the basis of the monthly litter data collected from circular fine-mesh traps (~0.2 m 2 ) at eight random locations for each FACE plot . We independently estimated a herbivory consumption flux of the canopy leaves and added this flux on top of the canopy leaf litter flux to obtain the total canopy leaf production flux , , . Considering an approximately annual canopy leaf lifespan , the estimated canopy leaf P production flux was slightly more than sufficient to replace the entire canopy P pool annually. The canopy P pool was a conservative estimate as it takes the mean of the time-varying canopy size, whereas the canopy leaf P production flux takes the cumulative leaf litterfall. The production fluxes of wood and coarse root were estimated based on the annual incremental change of wood and coarse-root biomass, respectively. The production flux of fine roots was estimated based on samples collected from in-growth cores at four locations per plot. The production flux of the understorey aboveground component was estimated on the basis of biomass clippings taken between 2014 and 2017, assuming one understorey turnover per harvest interval . The P concentrations in green canopy and understorey leaves were used to calculate canopy and understorey aboveground P production fluxes. The sapwood P concentration was used to calculate wood and coarse-root P production fluxes. P concentrations in bark, twig, reproductive structure and branch were assumed to be the same as those in sapwood. Plant P litter fluxes of canopy and understorey leaves were calculated using the respective litter production flux and the P concentration in senesced plant tissue. Litter P fluxes of bark, branch, twig and reproductive structure were assumed to be the same as their production P fluxes. 
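Converting trap-based litter masses and tissue P concentrations into an annual P flux is simple bookkeeping; a sketch with invented monthly litterfall values and an invented senesced-leaf P concentration:

```python
def annual_litter_p_flux(monthly_litter_g_m2, senesced_conc_mg_g):
    """Annual litterfall P flux (g P per m2 per yr) from monthly litter
    masses (g dry mass per m2) and the senesced-tissue P concentration
    (mg P per g dry mass)."""
    return sum(monthly_litter_g_m2) * senesced_conc_mg_g / 1000.0

# Invented monthly canopy litterfall for one plot (12 months)
monthly = [30, 25, 28, 22, 18, 15, 14, 16, 20, 26, 32, 34]
canopy_litter_p = annual_litter_p_flux(monthly, 0.25)  # g P per m2 per yr
```

The canopy leaf production P flux would add the independently estimated herbivory consumption flux on top of this litter-based term, and use the green-leaf rather than senesced-leaf concentration.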
Frass was collected monthly for 2 years, from all 8 litter traps per FACE plot, between late 2012 and 2014 (ref. ). Frass was oven-dried at 40 °C for 72 h. A microscope was used to identify the frass of leaf-chewing herbivores by shape, texture and colour, excluding lerps and starchy excretions of plant-sucking psyllids. After sorting, frass samples were weighed, pooled by plot and ground into a fine powder for chemical analysis. Monthly P concentrations were determined by placing 50 mg of sample in a muffle furnace (550 °C) for 8 h. The resulting ash was dissolved in 5 ml of 1% perchloric acid and the total P was quantified using the ascorbic acid–molybdate reaction. The frass P litter flux was estimated on the basis of the frass P concentration and the corresponding frass litter flux measured from the litter traps. The plant P-resorption flux was estimated as the sum of the canopy, understorey aboveground, sapwood, fine-root and coarse-root P-resorption fluxes. Plant P-resorption rates for the canopy and understorey leaves were estimated on the basis of the corresponding difference between fully expanded live and senesced leaf P concentrations. The sapwood P-resorption flux was estimated from the difference in P concentrations between sapwood and heartwood, and we used the same fraction to estimate the coarse-root resorption flux. The fine-root P-resorption coefficient was assumed to be a constant 50% owing to the difficulty of separating live and dead components of the fine roots. Total plant P uptake was estimated as the net difference between the plant P-demand and plant P-resorption fluxes. Overstorey and understorey P-use efficiencies to support the respective photosynthesis were calculated as the respective gross primary production divided by the corresponding leaf P-production flux.
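The resorption, net-uptake and residence-time bookkeeping used in this subsection reduces to simple arithmetic; a minimal sketch with invented numbers (the mean residence time, MRT, is the standing P pool over the uptake flux, as in the text):

```python
def resorption_fraction(conc_live, conc_senesced):
    """Fraction of tissue P withdrawn before senescence."""
    return (conc_live - conc_senesced) / conc_live

def plant_p_uptake(p_demand, p_resorption):
    """Net plant P uptake (g P per m2 per yr) = demand - resorption."""
    return p_demand - p_resorption

def mean_residence_time(standing_p_pool, p_uptake):
    """Plant P mean residence time (years) = standing pool / uptake flux."""
    return standing_p_pool / p_uptake

# Invented values: live 1.0 and senesced 0.4 mg P per g; fluxes in g P per m2 per yr
frac = resorption_fraction(1.0, 0.4)    # 0.6 of leaf P resorbed
uptake = plant_p_uptake(1.2, 0.7)
mrt = mean_residence_time(2.5, uptake)  # years
```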
The whole-plant P-use efficiency was estimated as the net primary production of both overstorey and understorey vegetation over the total plant P demand, because fine-root production includes contributions from both overstorey and understorey plants. The plant P mean residence time (MRT, years) was calculated as the standing vegetation P pool (excluding the heartwood and coarse root) over the plant P-uptake flux.

Soil P pools

Soil P pools were determined based on soil collected from four 2 m × 2 m subplots within each of the six FACE plots. A grid system was assigned to each soil subplot, and sampling locations were noted to ensure that the same location was not sampled more than once. At the time of sampling, three soil cores (3 cm diameter) were collected from each sample location and pooled into one composite sample for each subplot. Pooled soils were sieved (<2 mm). Soils were sampled repeatedly over the top 10 cm between 2013 and 2015, once for the 10–30 cm depth in 2013, and once in 2017 for 0–10 cm, 10–30 cm and from 30 cm to a hard clay layer located at variable depth across the site (median 56 cm, range 35–85 cm). P pools were calculated on the basis of the measured P concentrations and the mean soil bulk density measured at each depth class for each FACE plot (Extended Data Table ). The 2017 pool size down to 60 cm depth was calculated using the concentration measured between 30 cm and the clay layer. In soils from 2013 to 2015, the total soil P concentration was determined on finely milled (MM 400, Retsch), oven-dried (40 °C, 48 h) soils after aqua regia digestion and inductively coupled plasma mass spectrometry (ICP-MS) analysis (Environmental Analysis Laboratory, Southern Cross University). For 2017 soils, total, organic and inorganic soil P were determined by two methods.
Using an approach described previously, 1 g of oven-dried (40 °C, 48 h), finely ground (MM 400, Retsch) soil was either ignited for 1 h at 550 °C (for total P) or extracted untreated (for inorganic P) for 16 h with 25 ml of 0.5 M H2SO4, and the extracts were passed through a 0.2 µm filter before colorimetric analysis. Organic P was determined as the difference between total P and inorganic P. As this method has been shown to overestimate organic P in highly weathered soils, we also used a previously described approach whereby 2 g of milled soil was extracted for 16 h with 30 ml of a 0.25 M NaOH + 0.05 M EDTA solution. After passing the extract through a 0.2 µm filter, the filtrates were analysed for total P concentration (ICP-MS) and for inorganic P using the Malachite Green method, and organic P was computed as the difference between total P and inorganic P. The values for total P, inorganic P and organic P determined using the two methods were similar, and the values for the respective P classes were averaged across methods. Total P values determined in 2017 were also similar to those obtained previously using the aqua regia method. To determine operationally defined soil P pools, soils collected from the top 10 cm in 2013 were sequentially extracted with 1 M NH4Cl, 0.5 M NaHCO3 (pH 8.5), 0.1 M NaOH, 1 M HCl and 0.1 M NaOH according to a modified Hedley fractionation method. Each extract was analysed colorimetrically for inorganic P using the Malachite Green method. To determine organic P, a subsample of the extracts (2.5 ml) was digested with 0.55 ml of 11 M H2SO4 and 1.0 ml of 50% ammonium peroxydisulfate as previously described, and inorganic P was determined as before. Organic P was defined as the difference in inorganic P between digested and undigested samples. Occluded P was defined as the total P (as determined by aqua regia, described above) minus the sum of all other P concentrations.
We used the Hedley fractionation method to discriminate soil P pools of different chemical extractability as a potential indicator of soil P bioavailability. Notably, this method may introduce artifacts in certain chemical-fraction estimates. We therefore took a conservative approach by grouping the less-available soil P fractions into a residual P pool and reporting the more easily extractable fractions separately, which we operationally defined as exchangeable inorganic P, exchangeable organic P and moderately labile organic P. The extractable inorganic P pool (that is, labile Pi) was determined quarterly between 2013 and 2015 on 0–10 cm soils using the Bray-1 P extraction method, and once in 2017 (0–10 cm, 10–30 cm and 30–60 cm). Phosphate concentrations in soil extracts were determined colorimetrically using the molybdate blue assay (AQ2 Discrete Analyzer, SEAL Analytical) with an established method for available P (EPA-118-A rev.5). The proportion of change in concentration across depth in 2017 was applied to the averaged 2013–2015 measurements to estimate the concentrations across the 10–30 cm and 30–60 cm depths. The microbial P pool, comprising bacteria, archaea, protozoa and fungi, was assessed within 2 days of sampling using chloroform fumigation extraction; it was estimated quarterly between 2014 and 2015 for 0–10 cm and once in 2017 (0–10 cm, 10–30 cm and 30–60 cm). In brief, 3.75 g of soil was fumigated in the dark for 24 h. Phosphorus was extracted from fumigated and unfumigated samples using the Bray-1 P extraction method as above. Microbial biomass P was determined as the difference in extractable P between fumigated and unfumigated samples. A conversion factor of 0.4 was used to calculate the microbial P pool. The proportion of change in microbial P concentration across depth measured in 2017 was applied to the averaged 2014–2015 measurements per plot (0–10 cm) to estimate the concentrations across the 10–30 cm and 30–60 cm depths.
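The fumigation-extraction arithmetic can be sketched as below. One assumption is hedged in the code: the 0.4 conversion factor is applied as a divisor, the usual extraction-efficiency (k_P) correction; the text does not state the direction explicitly.

```python
def microbial_p(p_fumigated, p_unfumigated, k_p=0.4):
    """Microbial biomass P (same units as the extracts) from chloroform
    fumigation-extraction. Assumes the 0.4 conversion factor is a
    divisor (k_P extraction-efficiency correction) -- an assumption, as
    the source does not state how the factor is applied."""
    return (p_fumigated - p_unfumigated) / k_p

# Invented Bray-1 extractable P values (mg P per kg soil)
mbp = microbial_p(2.0, 1.2)
```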
Soil P fluxes

The soil net P-mineralization flux (gross mineralization minus gross immobilization) was determined in situ at the 0–10 cm depth on a quarterly basis, as the change in phosphate concentration between two timepoints, between January 2013 and January 2016 using PVC pipes. The soil net P-mineralization flux estimated using this method is subject to uncertainty because it does not include contributions from plant roots, which could potentially affect the C input and P exchange in the PVC pipes. However, the net soil P-mineralization flux was corroborated by estimates from other measurements that integrate all plant and microbial processes, namely microbial P, phosphatase enzyme activity, available P concentrations and soil P concentrations measured using the Hedley fractionation method. To estimate net P-mineralization fluxes in the deeper soil layers (10–30 cm, 30–60 cm), we assumed that the net mineralization activity was proportional to organic matter content, microbial biomass and fine-root biomass, and applied the proportion of change of measured soil C, microbial C and fine-root C across depth for each plot to the net P-mineralization flux measured at 0–10 cm. The values obtained with the three variables were very similar, differing by 4.5%; we therefore report the values estimated using soil C only. The soil P-leaching flux was estimated based on phosphate concentrations collected in deeper soils (35–75 cm) using a water suction lysimeter, assuming a water efflux of 20 ml m−2 d−1 through drainage at the site. The atmospheric P-deposition flux at the site was extracted from a gridded dataset.
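Under the stated drainage assumption (a constant efflux of 20 ml m−2 d−1), the leaching flux follows directly from the lysimeter phosphate concentration; a sketch with an invented concentration:

```python
def leaching_flux(po4_mg_per_l, efflux_ml_m2_d=20.0, days_per_year=365.0):
    """Soil P-leaching flux (g P per m2 per yr) from the lysimeter
    phosphate concentration (mg P per l) and an assumed constant
    drainage efflux (ml per m2 per day)."""
    litres_per_m2_per_year = efflux_ml_m2_d / 1000.0 * days_per_year
    return po4_mg_per_l * litres_per_m2_per_year / 1000.0

flux = leaching_flux(0.05)  # invented concentration of 0.05 mg P per l
```

With the 20 ml m−2 d−1 assumption, even a 1 mg P per l concentration corresponds to only about 0.007 g P per m2 per yr, so leaching is a small term in the budget.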
The total standing plant P pool was estimated as the sum of all vegetation P pools, namely: canopy, stem, fine-root, coarse-root, understorey aboveground, standing dead wood and forest floor leaf litter P pools. We generally adopted a concentration by biomass approach to estimate the plot-specific plant P pools unless otherwise stated in the methods below. Fully expanded green mature leaves from the overstorey trees were collected from 3–4 dominant or co-dominant trees per plot in February, May and October between 2013 and 2018, whereas senesced leaves were collected from 2–3 litter traps (~0.2 m 2 ) per plot in each February between 2013 and 2018 (ref. ). Green understorey leaves were collected in 2013, 2015 and 2017, and senesced understorey leaves were collected in June 2017. Total P concentrations of green and senesced leaves were determined using a standard Kjeldahl digestion procedure, using pure sulfuric acid and hydrogen peroxide (H 2 O 2 , 30%). The total P concentrations of the Kjeldahl digests were colorimetrically analysed at 880 nm after a molybdate reaction in a discrete analyzer (AQ2 Discrete Analyzer, SEAL Analytical, EPA135 method). Overstorey leaf P and understorey aboveground P pools were estimated based on the respective plot-level mean P concentration of the green leaves and the corresponding biomass data . The forest-floor leaf litter P pool was estimated on the basis of the forest-floor leaf litter pool and the senesced overstorey leaf P concentration. Woody materials (that is, bark, sapwood and heartwood) were sampled in November 2015 from breast height in three dominant trees per FACE plot. Sapwood was defined as the outer 20 mm of wood beneath the bark , . All woody materials were digested using the Kjeldahl procedure and analysed for total P concentration by inductively coupled plasma optical emission spectroscopy (Perkin-Elmer). For all chemical analyses, we ran blind internal standards, using NIST Standard Reference Material 1515 (U.S. 
National Institute of Standards and Technology) for quality-control purposes. Sapwood and heartwood P pools were calculated using the respective P concentrations and biomass data at the plot level. The total wood P pool was estimated as the sum of the sapwood and heartwood P pools. Standing dead wood P pool was estimated on the basis of standing dead woody biomass data, which pooled all dead trees within each plot together. We assumed the same sapwood and heartwood partitioning and used the respective P concentrations to obtain the total standing dead wood P pool for each plot. Coarse-root P pool was calculated based on coarse-root biomass and sapwood P concentration, with coarse-root biomass estimated based on an allometric relationship developed for Australian forest species . The fine-root P concentration was determined on the basis of fine-root samples collected using eight intact soil cores over the top 30 cm of the soils within 4 randomly located, permanent 1 m × 1 m subplots in each FACE plot. Fine roots included roots of both overstorey and understorey vegetation, and were considered fine roots when their diameter was <2 mm and no secondary growth. The samples were collected using a soil auger (5 cm diameter) in February 2014, June 2014, September 2014, December 2014, May 2015, September 2015 and February 2016. After sorting and oven-drying, small representative subsamples (~100 mg) from each standing crop core for each date were ground on the Wig-L-Bug dental grinder (Crescent Dental Manufacturing). Total P concentration in the sample was assessed using X-ray fluorescence spectrometry (Epsilon 3XLE, PANalytical) . We then used fine-root biomass data collected in December 2013 to extrapolate the depth profile in fine-root biomass down to the 30–60 cm soil horizon. 
We considered the intermediate root class (that is, roots with a diameter between 2–3 mm) to have the same P concentration as those of the fine root, and we pooled the two root classes into the fine-root P pool. We estimated the fine-root P pool based on fine-root P concentration and the biomass data for each plot.
Total plant P demand was estimated as the sum of all of the vegetation P fluxes to support the annual biomass growth, namely: canopy, stem, branch, bark, twig, reproduction, fine-root, coarse root and understorey aboveground P production fluxes. Each plant P production flux was calculated by multiplying the respective P concentration measured in the live plant organ and the corresponding annual biomass production rate. Specifically, canopy leaf, branch, bark, twig and reproductive structure biomass production fluxes were estimated on the basis of the monthly litter data collected from circular fine-mesh traps (~0.2 m 2 ) at eight random locations for each FACE plot . We independently estimated a herbivory consumption flux of the canopy leaves and added this flux on top of the canopy leaf litter flux to obtain the total canopy leaf production flux , , . Considering an approximately annual canopy leaf lifespan , the estimated canopy leaf P production flux was slightly more than sufficient to replace the entire canopy P pool annually. The canopy P pool was a conservative estimate as it takes the mean of the time-varying canopy size, whereas the canopy leaf P production flux takes the cumulative leaf litterfall. The production fluxes of wood and coarse root were estimated based on the annual incremental change of wood and coarse-root biomass, respectively. The production flux of fine roots was estimated based on samples collected from in-growth cores at four locations per plot. The production flux of the understorey aboveground component was estimated on the basis of biomass clippings taken between 2014 and 2017, assuming one understorey turnover per harvest interval . The P concentrations in green canopy and understorey leaves were used to calculate canopy and understorey aboveground P production fluxes. The sapwood P concentration was used to calculate wood and coarse-root P production fluxes. 
P concentrations in bark, twig, reproductive structure and branch were assumed to be the same as those in sapwood. Plant P litter fluxes of canopy and understorey leaves were calculated using the respective litter production flux and the P concentration in senesced plant tissue. Litter P fluxes of bark, branch, twig and reproductive structure were assumed to be the same as their production P fluxes. Frass was collected monthly for 2 years from all 8 litter traps per FACE plot between late 2012 and 2014 (ref. ). Frass was oven-dried at 40 °C for 72 h. A microscope was used to determine the frass of leaf-chewing herbivores using shape, texture and colour, and excluding lerps and starchy excretions by plant-sucking psyllids . After sorting, frass samples were weighed, pooled by plot and ground into a fine powder for chemical analysis. Monthly P concentrations were determined by placing 50 mg of sample in a muffle furnace (550 °C) for 8 h. The resulting ash was dissolved in 5 ml of 1% perchloric acid and the total P was quantified using the ascorbic acid–molybdate reaction . Frass P litter flux was estimated on the basis of the frass P concentration and the corresponding litter flux was measured from the litter traps. The plant P-resorption flux was estimated as the sum of canopy, understory aboveground, sapwood, fine-root and coarse-root P resorption fluxes. Plant P-resorption rates for the canopy and understorey leaves were estimated on the basis of the corresponding difference between fully expanded live and senesced leaf P concentrations. The sapwood P-resorption flux was estimated as the difference in P concentrations between sapwood and heartwood, and we used the same fraction to estimate coarse-root resorption flux. The fine-root P-resorption coefficient was assumed to be a constant of 50% due to the difficulty in separating live and dead components of the fine roots . 
Total plant P uptake was estimated as the net difference between plant P-demand and plant P-resorption fluxes. Overstorey and understorey P-use efficiency to support the respective photosynthesis were calculated as the respective gross primary production divided by their corresponding leaf P-production flux. The plant P-use efficiency was estimated as the total plant P demand over the net primary production of both overstorey and understorey vegetation, because fine-root production includes contributions from both overstorey and understorey plants. The plant P MRT (years) was calculated as the standing vegetation P pool (excluding the heartwood and coarse root) over the plant P-uptake flux.
Soil P pools were determined based on soil collected from four 2 m × 2 m subplots within each of the six FACE plots. A grid system was assigned to each soil subplot, and sampling locations were noted to ensure the same location was not sampled more than once. At the time of sampling, three soil cores (3 cm diameter) were collected from each sample location and pooled into one composite sample for each subplot. Pooled soils were sieved (<2 mm). Soils were repeatedly sampled over the top 10 cm between 2013 and 2015, once for the 10–30 cm depth in 2013 and once in 2017 for 0–10 cm, 10–30 cm and 30 cm to a hard clay layer located at variable depth across the site (median 56 cm, range 35–85 cm). P pools were calculated on the basis of the measured P concentrations and mean soil bulk density measures at each depth class for each FACE plot (Extended Data Table ). The pool size for 2017 up to 60 cm depth was calculated using the concentration measured below 30 cm and to the clay layer. In soil from 2013 to 2015, the total soil P concentration was determined on finely milled (MM 400, Retsche) oven-dried (40 °C, 48 h) soils after aqua regia digestion and inductively coupled plasma mass spectrometry (ICP-MS) analysis (Environmental Analysis Laboratory, Southern Cross University). For 2017 soils, total, organic and inorganic soil P were determined by two methods. Using an approach described previously , 1 g of oven-dried (40 °C, 48 h) finely ground (MM 400, Retsche) soil was either ignited for 1 h at 550 °C (for total P) or extracted untreated (for inorganic P) for 16 h with 25 ml of 0.5 M H 2 SO 4 and the extracts passed through a 0.2 µm filter before colorimetric analysis . Organic P was determined as the difference between total P and inorganic P. As the method has been shown to overestimate organic P in highly weathered soils , we also used a previously described approach whereby 2 g of milled soil was extracted for 16 h with 30 ml in a 0.25 M NaOH + 0.05 M EDTA solution. 
After passing the extract through a 0.2 µm filter, the filtrates were analysed for total P concentration (ICP-MS) and inorganic P using the Malachite Green method and organic P was computed as the difference between total P and inorganic P. Values obtained for total P, inorganic P and organic P that were determined using both methods were similar and values for the respective P classes were averaged across methods. Total P values determined in 2017 were also similar to those obtained previously using the aqua regia method. To determine operationally defined soil P pools, soils collected from the top 10 cm of the soil in 2013 were sequentially extracted with 1 M NH 4 Cl, 0.5 M NaHCO 3 (pH 8.5), 0.1 M NaOH, 1 M HCl and 0.1 M NaOH according to a modified Hedley fractionation method . Each extract was analysed colorimetrically for determination of inorganic P using the Malachite Green method . To determine organic P, a subsample of extracts (2.5 ml) was digested with 0.55 ml 11 M H 2 SO 4 and 1.0 ml 50% ammonium peroxydisulfate as previously described , and inorganic P determined as before. Organic P was defined as the difference in inorganic P between digested and undigested samples. The occluded P was defined as the total P (as determined by aqua regia, described above) minus the sum of all other P concentrations . We used the Hedley fractionation method to discriminate soil P pools of different chemical extractability as a potential indicator of soil P bioavailability. Notably, this method may introduce artifacts in certain chemical fraction estimates . We therefore took a conservative approach by grouping less-available soil P fractions as a residual P pool, and reported the more easily extractable fractions separately, which we operationally defined as exchangeable inorganic P, exchangeable organic P and moderately labile organic P. 
The extractable inorganic P pool (that is, labile P i ) was determined quarterly between 2013 and 2015 on 0–10 cm layer soils using the Bray-1 P extraction method , and once in 2017 (0–10 cm, 10–30 cm and 30–60 cm) . Phosphate concentrations in soil extracts were determined colorimetrically using the molybdate blue assay (AQ2 Discrete Analyzer, SEAL Analytical) with an established method for available P (EPA-118-A rev.5). The proportion of change in concentration across depth in 2017 was applied to the averaged 2013–2015 measurements to estimate the concentrations across 10–30 cm and 30–60 cm depths. The microbial P pool, comprising bacteria, archaea, protozoa and fungi, was assessed within 2 days of sampling using chloroform fumigation extraction , and estimated quarterly between 2014 and 2015 for 0–10 cm and once in 2017 (0–10 cm, 10–30 cm and 30–60 cm). In brief, 3.75 g soil was fumigated in the dark for 24 h. Phosphorus was extracted from fumigated and unfumigated samples using the Bray-1 P extraction method as above. Microbial biomass P was determined as the difference in extractable P between fumigated and unfumigated samples. A conversion factor of 0.4 was used to calculate the microbial P pool . The proportion of change in microbial P concentration across depth measured in 2017 was applied to the averaged 2014–2015 measurements per plot (0–10 cm) to estimate the concentrations across 10–30 cm and 30–60 cm depths.
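The fumigation-extraction calculation and depth extrapolation above can be sketched as follows. Numbers are illustrative, not site data, and dividing the fumigation flush by the 0.4 factor is the conventional reading of how such a conversion factor is applied (an assumption here):

```python
# Sketch of the chloroform-fumigation microbial P calculation and the
# depth extrapolation described above (illustrative values only).

def microbial_p(p_fumigated, p_unfumigated, k_p=0.4):
    """Microbial biomass P (mg P kg^-1) from Bray-1-extractable P."""
    flush = p_fumigated - p_unfumigated  # extra P released by fumigation
    return flush / k_p                   # correct for extraction efficiency

mb_p_0_10 = microbial_p(p_fumigated=10.0, p_unfumigated=6.0)  # ~10.0

# Deeper layers: scale the 0-10 cm value by the proportional change with
# depth observed in the single 2017 profile (hypothetical ratio)
ratio_10_30_vs_0_10 = 0.45
mb_p_10_30 = mb_p_0_10 * ratio_10_30_vs_0_10  # ~4.5
```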
The soil net P-mineralization flux (gross mineralization minus gross immobilization) was determined in situ at the 0–10 cm depth on a quarterly basis as the change in phosphate concentration between two timepoints between January 2013 and January 2016 using PVC pipes . The soil net P-mineralization flux estimated using this method is subject to uncertainty because it does not include contributions from plant roots that could potentially affect the C input and P exchange in the PVC pipes. However, the net soil P-mineralization flux was corroborated by estimates from other measurements that integrate all plant and microbial processes, namely microbial P, phosphatase enzyme and available P concentrations, and soil P concentrations measured using the Hedley fractionation method. To estimate net P-mineralization fluxes in deeper soil layers (10–30 cm, 30–60 cm), we assumed that the net mineralization activity was proportional to organic matter content, microbial biomass and fine-root biomass, and applied the proportion of change of measured soil C, microbial C and fine-root C across depth for each plot to the 0–10 cm measured net P-mineralization flux. The values obtained with the three variables were very similar, differing by 4.5%; we therefore report values estimated using soil C only. The soil P-leaching flux was estimated based on phosphate concentrations collected in deeper soils (35–75 cm) using a water suction lysimeter , assuming a water efflux of 20 ml m −2 d −1 through drainage at the site. The atmospheric P-deposition flux at the site was extracted from a gridded dataset .
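The depth extrapolation of the net mineralization flux described above amounts to scaling the measured surface flux by the proportional decline of soil C with depth; a minimal sketch with illustrative values:

```python
# Minimal sketch of the depth extrapolation of the net P-mineralization
# flux: the 0-10 cm flux is scaled by the soil-C proportion of each
# deeper layer. All values are hypothetical, not site data.

def scale_flux_by_depth(flux_0_10, c_0_10, c_deeper):
    """Scale a surface flux by the soil-C proportion of a deeper layer."""
    return flux_0_10 * (c_deeper / c_0_10)

flux_0_10 = 1.2  # net P mineralization at 0-10 cm (g P m^-2 yr^-1, hypothetical)
soil_c = {"0-10": 30.0, "10-30": 15.0, "30-60": 6.0}  # soil C (mg g^-1, hypothetical)

flux_10_30 = scale_flux_by_depth(flux_0_10, soil_c["0-10"], soil_c["10-30"])  # ~0.6
flux_30_60 = scale_flux_by_depth(flux_0_10, soil_c["0-10"], soil_c["30-60"])  # ~0.24
```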
We calculated treatment averages and their s.d. based on the plot-level data ( n = 3). We calculated the s.d. for the aggregated pools and fluxes (for example, total plant P pool) by summing the individual components that constitute the aggregated pool and flux for each plot and computing the s.d. within each treatment ( n = 3). The CO 2 treatment effect was calculated as the net difference between eCO 2 and aCO 2 plots, with its s.d. (SD eff ) calculated by pooling the s.d. values of the aCO 2 and eCO 2 treatments (SD amb and SD ele , respectively) as follows:
$${\mathrm{SD}}_{\mathrm{eff}}=\sqrt{\frac{{\mathrm{SD}}_{\mathrm{amb}}^{2}+{\mathrm{SD}}_{\mathrm{ele}}^{2}}{2}}$$ Owing to long-term environmental fluctuation and spatial heterogeneity across treatment plots, and the limited number of replicates in large-scale field-based experiments , , , , the classic dichotomous approach of statistical testing based on the P value alone may underestimate the more subtle responses in manipulative experiments such as EucFACE. We therefore used multiple analytical approaches to robustly quantify and interpret the CO 2 responses, including confidence intervals to indicate the effect size , (Fig. and Extended Data Figs. and ), linear mixed-effect models to report statistical results (Supplementary Information ) and bootstrap resampling as a sensitivity test (Extended Data Figs. and , Extended Data Table and Supplementary Information ). Reporting the means and confidence intervals is a useful way of assessing uncertainties in data, and has been shown to be more effective for assessing relationships within data than the use of P values alone, regardless of statistical significance , . We calculated the confidence interval for the CO 2 effect size (CI eff ) as:
$${\mathrm{CI}}_{\mathrm{eff}}={t}_{95}\,{\mathrm{SD}}_{\mathrm{eff}}\sqrt{\frac{1}{{n}_{1}}+\frac{1}{{n}_{2}}}$$ where t 95 is the critical value of the t -distribution at 95% with ( n 1 + n 2 − 2) d.f., and n 1 = n 2 = 3 is the sample size for each CO 2 treatment. Taking the same approach, we also calculated the confidence intervals at 85% and 75%, respectively, to demonstrate the decreasing level of confidence in the reported CO 2 effect size. For the mean CO 2 effect size to be statistically distinguishable from the null hypothesis at the 95%, 85% and 75% confidence levels, the corresponding confidence intervals must not overlap with zero. To investigate the main CO 2 effect statistically, and how temporal fluctuation may have affected the CO 2 effect (or the lack thereof), we built a linear mixed-effect model with CO 2 treatment, year and their interaction as fixed factors and treatment plot as a random factor. We followed the conventional approach to interpret these results (that is, a P -value cut-off of <0.05 as an indication of statistical significance between the ambient and elevated CO 2 treatment plots). The results of the linear mixed-effect models indicate a generally consistent main CO 2 effect across time (Supplementary Information ). We therefore reported only the main CO 2 effect based on the time-averaged plot-level data in the main text, and took an evidence-based approach to interpret the statistical significance of these results. Moreover, to quantify the uncertainties associated with temporal fluctuations in the measurements, we developed a bootstrapping method by randomly resampling datapoints from each CO 2 treatment 1,000 times without ignoring the temporal fluctuation in the measurements. This approach can be considered a sensitivity test. We then estimated the 95%, 85% and 75% confidence intervals of the bootstrapped CO 2 effect based on the resampled data .
Results of this analysis suggest that the uncertainties associated with temporal fluctuations in the data do not affect the findings described in the main text (Extended Data Figs. – and Supplementary Information ).
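The effect-size, pooled-s.d., confidence-interval and bootstrap calculations above can be sketched numerically as follows. The plot values are illustrative, not EucFACE data, and t95 = 2.776 is the two-sided 95% critical t value for 4 d.f.:

```python
# Numerical sketch of the effect-size statistics defined above,
# using hypothetical plot means (n = 3 per CO2 treatment).
import math
import random

amb = [10.0, 12.0, 11.0]  # hypothetical aCO2 plot values
ele = [12.0, 14.0, 13.0]  # hypothetical eCO2 plot values

def sd(x):
    """Sample standard deviation (n - 1 denominator)."""
    m = sum(x) / len(x)
    return math.sqrt(sum((v - m) ** 2 for v in x) / (len(x) - 1))

effect = sum(ele) / len(ele) - sum(amb) / len(amb)     # net CO2 effect
sd_eff = math.sqrt((sd(amb) ** 2 + sd(ele) ** 2) / 2)  # pooled s.d.

t95 = 2.776  # two-sided 95% critical t value, (n1 + n2 - 2) = 4 d.f.
ci_eff = t95 * sd_eff * math.sqrt(1 / 3 + 1 / 3)       # 95% CI half-width

# Bootstrap sensitivity test: resample plot values within each treatment
random.seed(1)
boot = []
for _ in range(1000):
    a = [random.choice(amb) for _ in range(3)]
    e = [random.choice(ele) for _ in range(3)]
    boot.append(sum(e) / 3 - sum(a) / 3)
boot.sort()
ci_boot = (boot[24], boot[974])  # approximate 2.5th and 97.5th percentiles
```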
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Any methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at 10.1038/s41586-024-07491-0.
Supplementary Information: Supplementary Information 1–4, including Supplementary Tables 1–5 and Supplementary Figs. 1 and 2. Reporting Summary.
Addressing childcare as a barrier to healthcare access through community partnerships in a large public health system

Lack of childcare is now being recognised as a significant barrier to accessing medical care. Women's health surveys indicate that problems getting childcare are reported more frequently by low-income women, creating a disparity in their access to medical services. Collaboration with community-based organisations to address lack of childcare creates a way for patients to access medical care instead of foregoing care, and a campus childcare centre offered at no cost to patients was used when made available. Electronic means of communication between community-based childcare staff and clinic personnel provide a transparent and efficient platform during a patient's medical care. Partnerships between healthcare systems and community-based childcare organisations can be leveraged to alleviate the access-to-care barrier that a lack of childcare resources presents. Childcare provision to facilitate attendance at medical appointments may provide a solution to address barriers to care for parents or caregivers, particularly when services are no cost, integrated into clinical workflows and located near and convenient to their medical appointments. Women face unique barriers to healthcare. While both women and men are impacted by health costs, the burden on women is higher because of their lower wages, more limited financial assets and higher poverty. These inequities result in women being more likely than men to have delayed or forgone healthcare. In a national sample of 2751 women ages 18–64, the 2017 Kaiser Health Survey found that, compared with men, women were more likely to delay or go without healthcare, with up to 26% of women reporting putting off or postponing preventative services, skipping recommended tests/treatments and cutting or skipping medications because of costs.
In another large survey, 45% of women delayed or did not receive cancer screenings or dental care because of costs versus 36% of men. These inequities, compounded with gender roles and expectations, present unique burdens on women, and while costs of care are important, consideration of additional burdens women face is critical to finding solutions towards equity. Understanding women's social determinants of health is imperative to meaningfully address challenges imposed in this population. Logistical barriers related to women's roles as caretakers and employees have also been identified as impacting access to care. The Kaiser survey of women found that 24% of women could not find time to go to the doctor, 23% could not take time off from work and 14% of women missed or delayed their own healthcare because of lack of childcare. While these barriers impacted all women, low-income women were more likely to experience both childcare problems and delays in obtaining healthcare. Health system employees had anecdotally noted that patients were frequently attending healthcare visits accompanied by small children or reporting that missed appointments were due to lack of childcare. This prompted a survey of 300 reproductive-aged women seeking healthcare services at Parkland Health in 2019, which found that over half of women reported missing or delaying care in the past year. Through structured interviews of women in ambulatory care settings, 52.7% of survey respondents cited childcare as the primary reason for missing healthcare appointments. Of those who reported delaying care, 38.2% delayed care for 1–6 months and 30.9% for 1 week to 1 month. Overall, 86.8% missed checkups and well visits, and 31.8% missed problem visits like specialty appointments and oncological care. Lack of childcare (52.7%) was the most frequently cited reason for missing care, followed by lack of transportation (32.8%) and lack of insurance (25.2%).
As a result, Parkland Health engaged in a partnership with a non-profit organisation, Mommies in Need, to address this critical need for childcare for patients to attend their medical appointments. Herein we describe this healthcare improvement initiative at a public health system that aimed to increase access to care by removing the lack of childcare as a barrier. Our primary objective was to measure patient utilisation of childcare services during medical appointments and, secondarily, to disseminate implementation procedures to organisations aiming for similar collaborations with community partners. The SQUIRE 2.0 guidelines were used for reporting. Parkland Health and Mommies in Need, a Dallas-based non-profit community-based organisation (CBO), forged a collaborative initiative to provide childcare for caregivers' children while the caregivers receive medical care in the public health system. Mommies in Need offers in-home, virtual and onsite childcare services and specialises in childcare for caregivers with medical needs. Approval from Parkland Health's leadership and Board of Managers kickstarted the organisational planning for the initiative in 2019. The childcare centre was constructed in a building that the public health system owned, located within walking distance from the main hospital campus and clinics. This building was leased to Mommies in Need for a term of 5 years. The centre opened in November 2020. Patient and public involvement Prior to partnering with Mommies in Need, existing patients were surveyed, and experiences were collected to gather feedback on reasons why women miss clinic appointments. In addition, missed appointments over 1 week in proposed pilot clinics were calculated to estimate the potential volume of childcare appointments that would be needed. Feedback was used to create a proposal to the health system executive leadership.
During implementation, patient input was used to modify patient-facing materials promoting the service and to identify areas to expand services. The primary outcome measured patient utilisation of childcare services during medical appointments because it reflected the unmet need for childcare. Patients had the opportunity to self-refer for childcare services at any time and were not obligated to use the childcare services if they indicated a need or enrolled in services. Ethical considerations Throughout the implementation of this initiative, patients' privacy and cultural belief systems about who can care for their children were discussed. Patients' demographics and referring clinic site (including their utilisation of childcare services) were used to further expand the programme to new clinics and tailor outreach to prospective patients in a way that fostered trust and respect for patient privacy. Patients' diagnosis information was not required to obtain childcare services, and all Mommies in Need employees were required to complete institutional HIPAA (Health Insurance Portability and Accountability Act) and compliance training. Organisational planning Team An executive sponsor was assigned for oversight, and a project manager led team meetings to ensure continuity between each component of the initiative. Membership to team meetings included personnel from the following Parkland Health departments in addition to Mommies in Need leadership: strategy and integration, the centre of innovation and value at Parkland, facilities, the police department (parking/shuttle services), information technology and external affairs. Costs Costs to fund the programme were split between Parkland Health and Mommies in Need. The health system provided the space for the childcare centre within the system's campus to facilitate geographic convenience and access to childcare services for patients.
To have this space readily available for this use, the public health system made site upgrades to the existing building. Parking lot, security equipment and technical equipment (including computers and telephones) were provided by the health system. A no-cost lease, maintenance costs including environmental services, utilities and security were provided by Parkland Health for an initial agreement of 5 years. Mommies in Need was responsible for the staffing, day-to-day management and operations of the centre, licensing and liability insurance related to the childcare centre and staff, supplies, equipment, furniture, build-out design fees and construction costs related to such improvements . These costs were covered through the non-profit's fundraising and charitable-giving efforts. All childcare provided was at no cost to patients. Logistics All employees and volunteers of Mommies in Need were set up as non-patient care non-employees, working in areas where patient care is not performed. According to the health system's regulations, Mommies in Need Childcare Center employees received corporate training and badge access after the employee was cleared with background checks. Also, to facilitate the communication between the childcare centre and the health system's outpatient clinics, Mommies in Need Childcare Center staff were provided limited access to the electronic medical record (EMR). This allowed for communications to be documented in the EMR and promoted a flow of information between the childcare centre staff and the health system clinical staff; for example, childcare staff would know where and when clinic appointments were scheduled that may require childcare. Mommies in Need offered two types of childcare programmes, with the only differences being the amount of time a child may attend and the documentation required for enrolment.
Both programmes followed policies and procedures set forth by the State of Texas and followed Minimum Standards for Childcare Centers. Pilot The initial rollout included the maternal-fetal medicine, gynaecology and medical oncology clinics, but later was expanded to most campus clinics, including but not limited to palliative care, radiology, neonatal intensive care units and immunisation appointments. A series of meetings were conducted with operational and nursing leaders from each of the pilot clinics to promote referrals into the childcare programme and to understand the specific use cases and workflows unique to each clinic. Clinic schedules were used to determine the hours of operation for Mommies in Need that would best suit the needs of the patient population. Workflow integration A key element for the flow of information between Mommies in Need and healthcare staff was the EMR. Workflows were designed to have minimal impact on clinical and business operations while leveraging EMR functionality to provide communication and transparency between the children at Mommies in Need and caregivers' clinical care areas. The following were integrated into the EMR workflows: Healthcare staff referrals: electronic referrals were built into the EMR to allow any member of the healthcare team to refer patients for childcare when the need was identified. Clarifying documentation: no-show and cancellation-of-appointment options were updated to include childcare as a reason for missed appointments throughout the organisation. Patient portal questionnaires were automatically sent to patients with missed or cancelled appointments to identify if childcare was a barrier. Direct to patient messaging: information about the no-cost childcare was built into the after-visit summary (AVS), informational pamphlets were made available in clinic and posters were placed in patient waiting areas.
Electronic portal questionnaires were sent to patients who either had a cancelled or a missed appointment in select clinics asking if the appointment was missed due to childcare. Options for responses included: yes, no and not applicable. Patient chart flags: custom patient chart flags were built to indicate that a patient was enrolled into the programme and/or had childcare for an upcoming appointment. Each flag had a corresponding icon that displayed within the provider clinic schedule as well as daily appointment reports for Mommies in Need personnel. The flags also triggered inbox messaging or a mobile text if enrolled patients had new or cancelled appointments, checked out from an appointment or were admitted to the hospital from the clinic appointment. Reporting: daily reports were made available to Mommies in Need staff that provided upcoming appointment information so that future childcare needs could be anticipated and discussed with patients when picking up their children. Measures The primary measures of success included acceptance and utilisation of childcare services in the first 12 months (November 2020–October 2021) of facility opening, including the number of families enrolled and frequency of childcare appointments made. Secondary measures were descriptive and included the mechanism through which patients were introduced to childcare services, types of appointments in which childcare was scheduled and demographic composition of the population in need of services. In addition, age, hours of care and number of children cared for were also captured. All outcomes reported were descriptive.
Patient chart flags : custom patient chart flags were built to indicate that a patient was enrolled into the programme and/or had childcare for an upcoming appointment. Each flag had a corresponding icon that displayed within the provider clinic schedule as well as daily appointment reports for Mommies in Need personnel. The flags also triggered inbox messaging or a mobile text if enrolled patients had new or cancelled appointments, checked out from an appointment or were admitted to the hospital from the clinic appointment. Reporting: daily reports were made available to Mommies in Need staff that provided upcoming appointment information so that future childcare needs could be anticipated and discussed with patients when picking up their children. The primary measures of success included acceptance and utilisation of childcare services in the first 12 months (November 2020–October 2021) of facility opening including the number of families enrolled and frequency of childcare appointments made. Secondary measures were descriptive and included the mechanism through which patients were introduced to childcare services, types of appointments in which childcare was scheduled and demographic composition of the population in need of services. In addition, age, hours of care and number of children cared for were also captured. All outcomes reported were descriptive. In the first 12 months, there were 175 families enrolled into the childcare programme run by Mommies in Need. Patients seeking childcare were primarily female with an average age of 31.8 and 29% (51/175) indicated Spanish as their primary language. The primary appointments booked for childcare services were from the obstetrics service followed by gynaecological services. Not all families who enrolled for services went on to schedule childcare (81% (142/175) scheduled appointments) for unknown reasons. There were 23 enrolled families for which childcare was scheduled; however, did not use the childcare service. 
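The enrolment funnel reported above reduces to simple arithmetic; a minimal sketch using the counts from the text (variable names are ours):

```python
# Tally of the first-year enrolment funnel (counts taken from the report).
enrolled = 175            # families enrolled with Mommies in Need
scheduled = 142           # families that went on to schedule childcare
scheduled_not_used = 23   # families that scheduled but never used the service

used = scheduled - scheduled_not_used          # families that enrolled AND used care
scheduling_rate = scheduled / enrolled         # fraction of enrolled that scheduled

print(used)                                    # → 119
print(round(scheduling_rate * 100))            # → 81 (per cent)
```

The 119 families computed here is the figure the report carries forward as "enrolled and used childcare services in the first year".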
Therefore, a total of 119 families both enrolled and used childcare services in the first year. Over the course of 1 year, 637 childcare appointments were made and 482 childcare appointments were completed for 191 children with an average age of 3.6 (±2.5) years. The average age of children followed school-day patterns, with older children attending during the summer months and during school breaks. A total of 3136 childcare hours were provided by Mommies in Need. Most patients were self-referred or verbally referred and learnt about the service through waiting room posters, word of mouth, AVS paperwork or news outlets. However, 53 patients were electronically referred, of which 18 enrolled in services. Thirty-four per cent (631/1833) of patients who received electronic portal questionnaires indicated that childcare was the reason for their missed appointment, and 27 families subsequently enrolled their children for future childcare based on the outreach generated from the missed-appointment questionnaire. There were several findings in the first 12 months that required adjustments to the original workflow. Once the childcare programme was socialised throughout the organisation, the need to expand the participating clinics was apparent. Services were also expanded beyond caregiver clinic appointments; for example, eligibility for childcare was extended to siblings of neonates in the neonatal intensive care unit so parents could spend time with their critically ill newborn. Also, some EMR triggers (eg, text notifications for checkout and admissions) either did not work outside of testing environments or varied by clinic, and alternative methods needed to be implemented. Finally, a full review of patient responses to patient portal messages asking about missed appointments indicated that some patients were sensitive to any verbiage indicating they ‘no showed’ or ‘cancelled’ an appointment.
We modified the trigger for the questionnaire to eliminate cancelled appointments and changed the questionnaire wording. Within the first year of opening, the centre doubled new child enrolments, effectively increasing services to more patients in need of childcare. Similar to communities across the nation, much work has been done by the county health system to address social determinants of health such as access to health insurance, transportation, access to providers in local communities and trust in the health system. With a focus on the specific needs of women, we identified childcare need as a significant barrier to accessing care for women of reproductive age and noted this to be the most cited barrier by the population that this health system serves. By quantifying this need through structured interviews with women accessing health services, we were able to begin to formulate new and innovative solutions to address this unmet need. The local philanthropic community has long supported and helped to advance the public health mission of Parkland Health, and Mommies in Need has been working to address lack of childcare as a barrier to healthcare since 2014. By partnering with Mommies in Need, we have been able to test the hypothesis that, by addressing the childcare needs of our patients through the delivery of no-cost, high-quality childcare on site at the medical campus, we can further support the health of our patients and thereby our communities. It should be noted that, prior to this health system/CBO collaboration, Mommies in Need provided childcare for parents and guardians experiencing a health crisis primarily through its in-home and virtual programmes. Thus, this partnership was developed to be site specific and tailored to the needs of our patients while harnessing the expertise of the Mommies in Need organisation.
While efforts to refine the processes and to improve access to these new services are ongoing, our early experience points to the need for health systems to acknowledge the unique barriers and stressors that specific patient populations, in this case mothers and caregivers of young children, experience. Work examining the health and social impact, costs and clinical outcomes of this solution is ongoing, with particular focus on the building of trust in the health system, resource utilisation, and the health outcomes associated with an improved ability to attend healthcare appointments and follow up on treatment recommendations. A limitation of this quality initiative is that we were unable to directly determine whether utilisation of childcare services affected missed medical appointments and health-seeking behaviour. We found that over 20% of requests for childcare were not directly associated with a medical appointment. In addition, of the 631 patients who indicated through the patient portal that childcare was the reason they missed their appointment, only 27 enrolled in childcare services when offered. This finding is hypothesis generating and may indicate that additional social determinants affect health-seeking behaviour in this population. These limitations should be investigated in future controlled (non-observational) studies, at which time the association between providing childcare and health-seeking behaviour can be assessed. This report is limited to showing the feasibility of partnering with a CBO and describing utilisation of this service within a health system. Improving the health of our patients and our communities requires innovative ways of examining, defining and addressing barriers to healthcare. These approaches include listening to patients, then redesigning programmes to address the patient-reported barriers and stressors.
This past year, during the COVID-19 pandemic, society was forced to take note of the many unacknowledged, uncompensated responsibilities that primary caregivers take on and of the societal impact of the current tenuous support system. Our local solution was to help ease the burden of childcare for the time it takes patients to access needed health services, with the goal of improving the health of our patients. To our knowledge, this is the first partnership of its kind between a non-profit organisation and a public health system in the USA to implement an on-site, hospital-based drop-in childcare centre for patients. By sharing the steps required for this initiative, our hope is to allow similar organisations to consider and potentially replicate such an intervention. |
Sustained bacterial N | 9d02d5f1-539c-4244-a8b6-3b218aed871e | 11096178 | Microbiology[mh] | pH is a key parameter controlling soil biogeochemistry, but soil acidification, a natural process accelerated by the reliance on synthetic nitrogen fertilizer, the growth of legumes, and acidic precipitation/deposition, plagues regions around the world. Biological processes fix about 180 Tg N per year, and conventional agriculture introduces more than 100 Tg N of chemically fixed N each year. N input accelerates soil N cycling, resulting in increased formation of N₂O, a compound linked to ozone depletion and climate change, as well as to the inhibition of biogeochemical processes such as methanogenesis, mercury methylation, and reductive dechlorination. The rise in global N₂O emissions indicates an imbalance between N₂O formation and consumption, which has been attributed to the functionality of the resident microbiome and environmental variables including the availability of electron donors for N oxide reduction, the concentrations of N oxyanions, oxygen content, copper availability, and pH. The reduction of N₂O to environmentally benign N₂ appears particularly susceptible to acidic pH, and acidic environments are generally considered N₂O emitters. A few studies reported N₂O consumption in denitrifying soil (slurry) microcosms with pH values below 5; however, soil heterogeneity and associated microscale patchiness of pH conditions, as well as pH increases during the incubation, make generalized conclusions untenable. Attempts with denitrifying enrichment and axenic cultures derived from soil have thus far failed to demonstrate growth-linked N₂O reduction and associated sustainability of such a process under acidic (pH < 6) conditions.
The only known sink for N₂O is microorganisms expressing N₂O reductase (NosZ), a periplasmic, copper-containing enzyme that catalyzes the conversion of N₂O to environmentally benign dinitrogen (N₂). NosZ expression and proteomics studies with the model denitrifier Paracoccus denitrificans suggested that acidic pH interferes with NosZ maturation (e.g., copper incorporation into the two dinuclear centers, CuZ and CuA), a phenomenon also observed in enrichment cultures harboring diverse N₂O-reducing bacteria. Studies with Marinobacter hydrocarbonoclasticus found active NosZ with a CuZ center in the 4Cu2S form in cells grown at pH 7.5, but observed a catalytically inactive NosZ with the CuZ center in the 4Cu1S form when the bacterium was grown at pH 6.5. The inability to synthesize functional canonical NosZ serves as an explanation for increased N₂O emissions at acidic pH; however, this paradigm cannot explain N₂O consumption in acidic soils. A metagenome-based analysis of soil microbial communities in the Luquillo Experimental Forest (El Yunque National Forest, Puerto Rico) provided evidence that N₂O-reducing soil microorganisms are not limited to circumneutral-pH soils and exist in strongly acidic (pH 4.5–5.0) tropical forest soils. Anoxic microcosms established with acidic Luquillo Experimental Forest soil and maintained at pH 4.5 demonstrated sustained N₂O reduction activity, and comparative metagenomic studies implicated strictly anaerobic taxa harboring clade II nosZ, but lacking nitrite reductase genes (nirS, nirK), in N₂O reduction. While the effects of pH on facultative anaerobic, denitrifying species have been studied, efforts to explore strictly anaerobic non-denitrifiers capable of N₂O reduction are largely lacking. In this work, we integrate cultivation and omics approaches to characterize a non-denitrifying two-species co-culture derived from acidic tropical soil.
The co-culture comprises an acidophilic, anaerobic bacterium, Desulfosporosinus nitrosoreducens, that couples respiratory N₂O reduction with hydrogen oxidation at pH 4.5–6.0, but not at or above pH 7.
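Hydrogenotrophic N₂O respiration (N₂O + H₂ → N₂ + H₂O) is a two-electron transfer on both the donor and the acceptor side, so electron-donor demand can be bookkept directly. A minimal illustrative sketch (the amounts are hypothetical, not measurements from this study):

```python
# Back-of-the-envelope electron balance for N2O respiration.
# N2O + H2 -> N2 + H2O: 2 electrons accepted per N2O, 2 donated per H2;
# formate oxidation to CO2 likewise donates 2 electrons.
E_PER_H2 = 2        # electrons donated per H2 oxidized
E_PER_FORMATE = 2   # electrons donated per formate oxidized to CO2
E_PER_N2O = 2       # electrons accepted per N2O reduced to N2

def n2o_reducible(h2_mmol: float, formate_mmol: float) -> float:
    """Maximum N2O (mmol) reducible with the supplied electron donors."""
    electrons = h2_mmol * E_PER_H2 + formate_mmol * E_PER_FORMATE
    return electrons / E_PER_N2O

# e.g., 1 mmol H2 plus 0.5 mmol pyruvate-derived formate covers 1.5 mmol N2O
print(n2o_reducible(1.0, 0.5))  # → 1.5
```

Because donor and acceptor both carry two electron equivalents here, H₂ (or formate) and N₂O are consumed in a 1:1 molar ratio.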
A consortium consisting of two species reduces N₂O at pH 4.5

Microcosms established with El Verde tropical soil amended with lactate consumed N₂O at pH 4.5; however, N₂O-reducing activity was lost upon transfers to vessels with fresh medium containing lactate. The addition of acetate, formate (1 or 5 mM each), CO₂ (208 µmol, 2.08 mM nominal), propionate (5 mM), or yeast extract (0.10–10 g L⁻¹) did not stimulate N₂O reduction in pH 4.5 transfer cultures. Limited N₂O consumption was observed in transfer cultures amended with 2.5 mM pyruvate, but complete removal of N₂O required the addition of H₂ or formate. In transfer cultures with H₂ or formate, but lacking pyruvate, N₂O was not consumed. Subsequent transfers in completely synthetic basal salt medium amended with both pyruvate and H₂ yielded a robust enrichment culture that consumed N₂O at pH 4.5 (Fig. ). Phenotypic characterization illustrated that pyruvate utilization was independent of N₂O, while N₂O reduction only commenced following pyruvate consumption. The fermentation of pyruvate yielded acetate, CO₂, and formate as measurable products, with formate and external H₂ serving as electron donors for subsequent N₂O reduction (Supplementary Fig. and Note ). The fermentation of pyruvate resulted in pH increases, with the magnitude of the medium pH change proportional to the initial pyruvate concentration: fermentation of 2.5 mM pyruvate increased the medium pH by 0.53 ± 0.03 pH units, whereas a smaller increase of 0.22 ± 0.02 pH units was observed with 0.5 mM pyruvate (Supplementary Fig. ). N₂O reduction was also observed in cultures that received 5 mM glucose. N₂O reduction was oxygen sensitive, and N₂O was not consumed in medium without reductant (i.e., cysteine or dithiothreitol). Microbial community profiling of El Verde soil and solids-free transfer cultures documented effective enrichment in defined pH 4.5 medium amended with pyruvate, H₂, and N₂O (Fig.
B and Supplementary Note ). Following nine consecutive transfers, Serratia and Desulfosporosinus each contributed about half of the 16S rRNA amplicon sequences (49.7% and 50.2%, respectively), and less than 0.05% of the sequences represented Planctomycetota, Lachnoclostridium, and Caproiciproducens. Deep shotgun metagenome sequencing performed on a 15th transfer culture recovered two draft genomes representing the Serratia sp. and the Desulfosporosinus sp., accounting for more than 95% of the total short-read fragments. All 16S rRNA genes associated with assembled contigs could be assigned to Serratia or Desulfosporosinus (Supplementary Fig. and Note ), indicating that the enrichment process yielded a consortium consisting of a Serratia sp. and a Desulfosporosinus sp., designated co-culture EV (El Verde). Efforts to recover the Serratia and Desulfosporosinus genomes from the original soil metagenome data sets by recruiting the soil metagenome fragments to the two genomes (Fig. ) were not successful, highlighting the effectiveness of the enrichment strategy. Redundancy-based analysis with Nonpareil revealed that the average covered species richness in the metagenome data set obtained from the 15th transfer culture was 99.9%, much higher than that achieved for the El Verde original soil inoculum (39.5%), suggesting the metagenome analysis of the original soil did not fully capture the resident microbial diversity. The application of 16S rRNA gene-targeted qPCR assays to DNA extracted from 9th transfer N₂O-reducing cultures revealed a bimodal growth pattern. During pyruvate fermentation (Phase I), the Serratia cell numbers increased nearly 1,000-fold from (2.3 ± 0.8) × 10² to (1.8 ± 0.2) × 10⁵ cells mL⁻¹, followed by a 40-fold increase from (3.5 ± 1.5) × 10⁴ to (1.2 ± 0.4) × 10⁶ cells mL⁻¹ of Desulfosporosinus cells during N₂O reduction (Phase II) (Fig. ).
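The cell-count bookkeeping behind fold changes and growth yields can be reproduced in a few lines. A sketch using the qPCR values above; the amount of N₂O consumed per mL is an assumed figure included only to illustrate the yield calculation:

```python
# Growth bookkeeping from the qPCR counts reported above (cells mL^-1).
serratia_t0, serratia_t1 = 2.3e2, 1.8e5   # Phase I (pyruvate fermentation)
desulfo_t0, desulfo_t1 = 3.5e4, 1.2e6     # Phase II (N2O reduction)

serratia_fold = serratia_t1 / serratia_t0  # ~780-fold ("nearly 1,000-fold")
desulfo_fold = desulfo_t1 / desulfo_t0     # ~34-fold (~40-fold within error)

# Yield = cells produced per substrate consumed. The N2O consumed per mL
# is an ASSUMED value (~3.8 mM reduced), not a measurement from the study.
n2o_consumed_mmol_per_ml = 3.8e-3
yield_cells_per_mmol = (desulfo_t1 - desulfo_t0) / n2o_consumed_mmol_per_ml
print(f"{yield_cells_per_mmol:.1e}")       # on the order of 10^8 cells mmol^-1
```

With that assumed consumption, the computed yield lands near the reported (3.1 ± 0.11) × 10⁸ cells mmol⁻¹ of N₂O.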
In vessels without N₂O, Desulfosporosinus cell numbers did not increase, indicating that growth of this population depended on the presence of N₂O. Growth yields of (3.1 ± 0.11) × 10⁸ cells mmol⁻¹ of N₂O and (7.0 ± 0.72) × 10⁷ cells mmol⁻¹ of pyruvate were determined for the Desulfosporosinus and the Serratia populations, respectively. The growth yield of Desulfosporosinus with N₂O as electron acceptor is on par with growth yields reported for neutrophilic N₂O-reducing bacteria with clade II nosZ under comparable growth conditions. 16S rRNA gene amplicon sequencing performed on representative samples collected at the end of Phase I (day 7) and Phase II (day 18) confirmed the bimodal growth pattern: sequences representing Serratia increased during Phase I, and Desulfosporosinus sequences increased during Phase II (Fig. ). Taken together, the physiological characterization, qPCR, genomic, and amplicon sequencing results indicate that co-culture EV performs low-pH N₂O reduction, with a Serratia sp. fermenting pyruvate and a Desulfosporosinus sp. reducing N₂O. Streaking aliquots of a 1:10-diluted 15th co-culture suspension sample onto Tryptic Soy Agar (TSA) solid medium under an air headspace yielded an axenic Serratia sp., designated strain MF, capable of pyruvate fermentation. Despite extensive efforts, the N₂O-reducing Desulfosporosinus sp. resisted isolation, presumably due to obligate interaction(s) with strain MF (see below and Supplementary Note ).

Identification of auxotrophies

To investigate the specific nutritional requirements of the Desulfosporosinus sp. in co-culture EV, untargeted metabolome analysis was conducted on supernatant collected from axenic Serratia sp. cultures growing with pyruvate and during N₂O consumption (Phase II) following inoculation with co-culture EV (Fig. ).
Peaks representing potential metabolites were searched against a custom library (Supplementary Dataset ), and 33 features could be assigned to known structures, including seven amino acids (alanine, glutamate, methionine, valine, leucine, aspartate, and tyrosine). Cystine, the oxidized derivative of the amino acid cysteine, was also detected; however, neither cystine nor cysteine was found in cultures where dithiothreitol (DTT) replaced cysteine as the reductant, suggesting that Serratia did not excrete either compound into the culture supernatant. Time-series metabolome analysis of culture supernatant demonstrated dynamic changes to the amino acid profile following inoculation with the Serratia sp. and the Desulfosporosinus sp. (as co-culture EV) (Fig. ). Alanine, valine, leucine, and aspartate increased during pyruvate fermentation (Phase I) and were not consumed by the Serratia sp. (Supplementary Fig. ). Consumption of alanine, valine, leucine, and aspartate did occur following the inoculation of the Desulfosporosinus sp. (as co-culture EV) (Fig. ). These findings suggest that the N₂O-reducing Desulfosporosinus sp. is an amino acid auxotroph, and a series of growth experiments explored whether amino acid supplementation (Supplementary Table ) could substitute for pyruvate fermentation by the Serratia sp. in enabling N₂O consumption by the Desulfosporosinus sp. The addition of individual amino acids (n = 20) was not sufficient to initiate N₂O reduction in pH 4.5 medium, nor was the combination of alanine, valine, leucine, aspartate, and tyrosine. Incomplete N₂O consumption (<20% of the initial dose) was observed in cultures supplemented with the 5-amino-acid combination plus methionine. N₂O reduction and growth of the Desulfosporosinus sp. occurred without delay in cultures supplied with a 15-amino-acid mixture (Fig. ).
Omission of single amino acids from the 15-amino-acid mixture led to incomplete N₂O reduction, similar to what was observed with the 6-amino-acid combination. Efforts to isolate the Desulfosporosinus sp. in medium without pyruvate but amended with amino acids were unsuccessful because of concomitant growth of the Serratia sp., as verified with qPCR.

pH range of acidophilic N₂O reduction by the Desulfosporosinus sp.

Growth assays with co-culture EV were performed to determine the pH range for N₂O reduction. Co-culture EV reduced N₂O at pH 4.5, 5.0 and 6.0, but not at pH 3.5, 7.0 and 8.0. pH 4.5 cultures exhibited about two-times-longer lag periods (i.e., 10 versus 5 days) prior to the onset of N₂O consumption than cultures incubated at pH 5.0 or 6.0 (Supplementary Fig. ). In medium without amino acid supplementation, pyruvate fermentation was required for the initiation of N₂O consumption (Fig. ), raising the question whether pH impacts pyruvate fermentation by the Serratia sp., N₂O reduction by the Desulfosporosinus sp., or both processes. Axenic Serratia sp. cultures fermented pyruvate over a pH range of 4.5 to 8.0, with the highest pyruvate consumption rates of 1.47 ± 0.04 mmol L⁻¹ day⁻¹ observed at pH 6.0 and 7.0, and the lowest rates measured at pH 4.5 (0.43 ± 0.05 mmol L⁻¹ day⁻¹) (Supplementary Fig. ). The N₂O consumption rates in co-culture EV between pH 4.5 and 6.0 were similar and ranged from 0.24 ± 0.01 to 0.26 ± 0.01 mmol L⁻¹ day⁻¹ (Supplementary Fig. ). These findings suggest that pyruvate fermentation by the Serratia sp., not N₂O reduction by the Desulfosporosinus sp., explains the extended lag periods observed at pH 4.5 (Supplementary Fig. ). Consistently, shorter lag phases for both N₂O reduction and Desulfosporosinus growth were observed in co-culture EV amended with the amino acid mixture (Fig. ).
Phylogenomic analysis

Phylogenomic reconstruction based on a concatenated alignment of 120 bacterial marker genes corroborated the affiliation of the N₂O-reducing bacterium with the genus Desulfosporosinus (Fig. ). The genus Desulfosporosinus comprises strictly anaerobic, sulfate-reducing bacteria, and Desulfosporosinus acididurans strain SJ4 and Desulfosporosinus acidiphilus strain M1 were characterized as acidophilic sulfate reducers. Genome analysis revealed shared features between the N₂O-reducing Desulfosporosinus sp. and characterized Desulfosporosinus spp. (Supplementary Note ). The N₂O-reducing Desulfosporosinus sp. in co-culture EV possesses the aprAB and dsrAB genes encoding adenylyl sulfate reductase and dissimilatory sulfite reductase, respectively, but lacks the sat gene encoding sulfate adenylyltransferase/sulfurylase. To provide experimental evidence that the N₂O-reducing Desulfosporosinus sp. in co-culture EV lacks the ability to reduce sulfate, a hallmark feature of the genus Desulfosporosinus, comparative growth studies were performed. The N₂O-reducing Desulfosporosinus sp. in co-culture EV did not grow with sulfate as sole electron acceptor, consistent with an incomplete dissimilatory sulfate reduction pathway (Supplementary Fig. ). Desulfosporosinus acididurans strain D, a close relative of the N₂O-reducing Desulfosporosinus sp. in co-culture EV, grew with sulfate in pH 5.5 medium, but did not grow with N₂O as electron acceptor under the same incubation conditions (Supplementary Fig. ). These observations corroborate the genomic analysis indicating that the N₂O-reducing Desulfosporosinus sp. lacks the ability to perform dissimilatory sulfate reduction. Based on phylogenetic and physiologic features, the N₂O reducer in culture EV represents a novel Desulfosporosinus species, for which the name Desulfosporosinus nitrosoreducens strain PR is proposed (https://seqco.de/i:32619).
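Supermatrices such as the 120-marker concatenated alignment used here are assembled by joining per-gene alignments taxon by taxon in a fixed gene order. A minimal sketch (marker names and sequences are toy placeholders, not the actual marker set):

```python
# Building a concatenated phylogenomic matrix from per-gene alignments.
# Each per-gene alignment maps taxon -> aligned (gap-containing) sequence.
marker_alignments = {
    "rpsB": {"taxonA": "MKL-A", "taxonB": "MKLVA"},
    "gyrB": {"taxonA": "HQWE",  "taxonB": "HQFE"},
}

def concatenate(alignments: dict) -> dict:
    """Join per-gene alignments into one supermatrix row per taxon."""
    taxa = sorted({t for aln in alignments.values() for t in aln})
    # Fixed (sorted) gene order keeps columns comparable across taxa.
    return {t: "".join(alignments[g][t] for g in sorted(alignments)) for t in taxa}

supermatrix = concatenate(marker_alignments)
print(supermatrix["taxonA"])  # → HQWEMKL-A
```

Real pipelines additionally insert all-gap blocks for taxa missing a marker so that every row keeps the same length.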
Genetic underpinning of N₂O reduction in Desulfosporosinus nitrosoreducens strain PR

The strain PR genome harbors a single nosZ gene affiliated with clade II (Fig. ). Independent branch placement of the strain PR NosZ on the clade II NosZ tree suggests an ancient divergence, a finding supported by the NosZ amino acid identity (AI) relative to the average amino acid identity (AAI) of the closest matching NosZ-encoding genome. Specifically, comparisons between the proteins encoded on the genomes of Desulfosporosinus nitrosoreducens strain PR and Desulfosporosinus meridiei showed genus-level AAI relatedness (i.e., AAI 73.83%), which was significantly higher than the AI of the encoded NosZ (i.e., AI 44%), indicating fast evolution of this protein and/or horizontal nosZ acquisition from a distant relative (Figs. and ). The NosZ of Desulfosporosinus nitrosoreducens strain PR is slightly more similar (AI: 45%) to the NosZ of the distant relative Desulfotomaculum ruminis. Comparative analysis of the strain PR nos gene cluster with bacterial and archaeal counterparts corroborated characteristic clade II features, including a Sec translocation system, genes encoding cytochromes and an iron-sulfur protein, and a nosB gene located immediately downstream of nosZ (Fig. ). nosB encodes a transmembrane protein of unknown function and has been found in clade II, but not clade I, nos clusters. The nos gene clusters of closely related taxa (e.g., Desulfosporosinus meridiei, Desulfitobacterium dichloroeliminans, Desulfitobacterium hafniense) show similar organization; however, differences were observed in the nos gene cluster of Desulfosporosinus nitrosoreducens strain PR. Specifically, the genes encoding an iron-sulfur cluster protein and cytochromes precede nosZ in Desulfosporosinus meridiei, but are located downstream of two genes encoding proteins of unknown function in strain PR (Fig. ).
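The AAI/AI contrast above rests on pairwise percent identity between aligned protein sequences: AAI averages this quantity over all shared orthologs of two genomes, while the AI of NosZ is the same quantity for a single protein pair. A minimal sketch of the underlying calculation (toy alignment, gaps excluded from the denominator as one common convention):

```python
# Percent identity between two pre-aligned protein sequences.
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Identity over gap-free aligned columns; inputs must be equal length."""
    assert len(seq_a) == len(seq_b)
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    matches = sum(a == b for a, b in pairs)
    return round(100 * matches / len(pairs), 1)

print(percent_identity("MKV-TALG", "MKVATSLG"))  # → 85.7
```

A protein whose pairwise identity (here, NosZ at 44%) falls far below the genome-wide AAI (73.83%) is the signal read above as accelerated evolution or horizontal acquisition.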
Of note, among the microbes with nos operons included in the analyses, only Desulfosporosinus nitrosoreducens and Nitratiruptor labii, both with a clade II nos cluster, have been experimentally validated to grow with N₂O below pH 6.

Genomic insights into a commensalistic relationship

Functional annotation of the Serratia sp. and the Desulfosporosinus nitrosoreducens strain PR genomes was conducted to investigate the interspecies interactions (Fig. ). A btsT gene encoding a specific, high-affinity pyruvate/proton symporter and genes implicated in pyruvate fermentation (i.e., pflAB, poxB) are present on the Serratia genome, but are missing from the strain PR genome, consistent with the physiological characterization results. fdhC genes encoding a formate transporter are present on both genomes, but only the strain PR genome harbors the fdh gene cluster encoding a formate dehydrogenase complex (Supplementary Fig. ), consistent with the observation that the Serratia sp. excretes formate, which strain PR utilizes as electron donor for N₂O reduction (Supplementary Fig. ). Gene clusters encoding two different Ni/Fe-type hydrogenases (i.e., the hyp and hya gene clusters) (Supplementary Fig. ) and a complete nos gene cluster (Fig. ) are present on the strain PR genome, but not on the Serratia sp. genome. Based on the KEGG and UniProt databases, the Serratia genome contains complete (100%) biosynthetic pathways for aspartate, lysine, threonine, tryptophan, isoleucine, serine, leucine, valine, glutamate, arginine, proline, methionine, tyrosine, cysteine, and histidine. In contrast, only aspartate and glutamate biosynthesis are predicted to be complete on the strain PR genome, whereas the completeness of the biosynthetic pathways for the other amino acids was below 80%. The Serratia genome encodes a complete set of TCA cycle enzymes, indicating the potential for forming aspartate and glutamate via transamination of oxaloacetate and α-ketoglutarate.
In contrast, the strain PR genome lacks the genes encoding malate dehydrogenase, citrate synthase, and aconitate hydratase, indicative of an incomplete TCA cycle. Strain PR is therefore deficient in de novo formation of the precursors for glutamate, aspartate, alanine, and related amino acids. A high-affinity amino acid transport system was found on the strain PR genome (Supplementary Fig. ), suggesting this bacterium can efficiently acquire extracellular amino acids to meet its nutritional requirements.
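Pathway-completeness assessments like those described above reduce to set arithmetic over annotated genes: completeness is the fraction of a pathway's required genes detected in the genome. A minimal sketch; the gene lists are illustrative stand-ins, not the actual KEGG module definitions (sat, aprAB and dsrAB are named in the text; gltA, acnA, mdh and sucA are the conventional gene names for the TCA enzymes mentioned):

```python
# Pathway completeness as fraction of required genes found in the annotation.
required = {
    "dissimilatory sulfate reduction": {"sat", "aprA", "aprB", "dsrA", "dsrB"},
    "TCA cycle (excerpt)": {"gltA", "acnA", "mdh", "sucA"},
}
annotated = {"aprA", "aprB", "dsrA", "dsrB", "sucA"}  # strain PR-like gene set

for pathway, genes in required.items():
    completeness = 100 * len(genes & annotated) / len(genes)
    print(f"{pathway}: {completeness:.0f}% complete")
```

With the sat gene absent, sulfate reduction scores as incomplete even though aprAB and dsrAB are present, mirroring the reasoning applied to strain PR.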
2 O at pH 4.5 Microcosms established with El Verde tropical soil amended with lactate consumed N 2 O at pH 4.5; however, N 2 O-reducing activity was lost upon transfers to vessels with fresh medium containing lactate. The addition of acetate, formate (1 or 5 mM each), and CO 2 (208 µmol, 2.08 mM nominal), propionate (5 mM), or yeast extract (0.10 – 10 g L −1 ) did not stimulate N 2 O reduction in pH 4.5 transfer cultures. Limited N 2 O consumption was observed in transfer cultures amended with 2.5 mM pyruvate, but complete removal of N 2 O required the addition of H 2 or formate. In transfer cultures with H 2 or formate, but lacking pyruvate, N 2 O was not consumed. Subsequent transfers in completely synthetic basal salt medium amended with both pyruvate and H 2 yielded a robust enrichment culture that consumed N 2 O at pH 4.5 (Fig. ). Phenotypic characterization illustrated that pyruvate utilization was independent of N 2 O, while N 2 O reduction only commenced following pyruvate consumption. The fermentation of pyruvate yielded acetate, CO 2 , and formate as measurable products, with formate and external H 2 serving as electron donors for subsequent N 2 O reduction (Supplementary Fig. and Note ). The fermentation of pyruvate resulted in pH increases, with the magnitude of the medium pH change proportional to the initial pyruvate concentration. The fermentation of 2.5 mM pyruvate increased the medium pH by 0.53 ± 0.03 pH units whereas a lower pH increase of 0.22 ± 0.02 pH units was observed with 0.5 mM pyruvate (Supplementary Fig. ). N 2 O reduction was also observed in cultures that received 5 mM glucose. N 2 O reduction was oxygen sensitive and N 2 O was not consumed in medium without reductant (i.e., cysteine or dithiothreitol). Microbial community profiling of El Verde soil and solids-free transfer cultures documented effective enrichment in defined pH 4.5 medium amended with pyruvate, H 2 , and N 2 O (Fig. B and Supplementary Note ). 
Following nine consecutive transfers, Serratia and Desulfosporosinus each contributed about half of the 16S rRNA amplicon sequences (49.7% and 50.2%, respectively), and less than 0.05% of the sequences represented Planctomycetota , Lachnoclostridium , Caproiciproducens . Deep shotgun metagenome sequencing performed on a 15 th transfer culture recovered two draft genomes representing the Serratia sp. and the Desulfosporosinus sp., accounting for more than 95% of the total short read fragments. All 16S rRNA genes associated with assembled contigs could be assigned to Serratia or Desulfosporosinus (Supplementary Fig. and Note ), indicating that the enrichment process yielded a consortium consisting of a Serratia sp. and a Desulfosporosinus sp., designated co-culture EV (El Verde). Efforts to recover the Serratia and Desulfosporosinus genomes from the original soil metagenome data sets via recruiting the soil metagenome fragments to the two genomes (Fig. ) were not successful, highlighting the effectiveness of the enrichment strategy. Redundancy-based analysis with Nonpareil revealed that the average covered species richness in the metagenome data set obtained from the 15 th transfer culture was 99.9%, much higher than what was achieved for the El Verde original soil inoculum (39.5%), suggesting the metagenome analysis of the original soil did not fully capture the resident microbial diversity. The application of 16S rRNA gene-targeted qPCR assays to DNA extracted from 9 th transfer N 2 O-reducing cultures revealed a bimodal growth pattern. During pyruvate fermentation (Phase I), the Serratia cell numbers increased nearly 1,000-fold from (2.3 ± 0.8) × 10 2 to (1.8 ± 0.2) × 10 5 cells mL −1 , followed by a 40-fold increase from (3.5 ± 1.5) × 10 4 to (1.2 ± 0.4) × 10 6 cells mL −1 of Desulfosporosinus cells during N 2 O reduction (Phase II) (Fig. ). 
In vessels without N 2 O, Desulfosporosinus cell numbers did not increase, indicating that growth of this population depended on the presence of N 2 O. Growth yields of (3.1 ± 0.11) × 10 8 cells mmol −1 of N 2 O and (7.0 ± 0.72) × 10 7 cells mmol −1 of pyruvate were determined for the Desulfosporosinus and the Serratia populations, respectively. The growth yield of Desulfosporosinus with N 2 O as electron acceptor is on par with growth yields reported for neutrophilic N 2 O-reducing bacteria with clade II nosZ under comparable growth conditions , . 16S rRNA gene amplicon sequencing performed on representative samples collected at the end of Phase I (day 7) and Phase II (day 18) confirmed a bimodal growth pattern. Sequences representing Serratia increased during Phase I and Desulfosporosinus sequences increased during Phase II (Fig. ). Taken together, the physiological characterization, qPCR, genomic, and amplicon sequencing results indicate that co-culture EV performs low pH N 2 O reduction, with a Serratia sp. fermenting pyruvate and a Desulfosporosinus sp. reducing N 2 O. Streaking aliquots of a 1:10-diluted 15 th co-culture suspension sample onto Tryptic Soy Agar (TSA) solid medium under an air headspace yielded an axenic Serratia sp., designated strain MF, capable of pyruvate fermentation. Despite extensive efforts, the N 2 O-reducing Desulfosporosinus sp. resisted isolation, presumably due to obligate interaction(s) with strain MF (see below and Supplementary Note ).
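As a consistency check, the reported Desulfosporosinus growth yield can be back-calculated against the qPCR cell counts; a minimal sketch using only values quoted above (the implied N 2 O consumption is an estimate derived from those numbers, not a measured quantity):

```python
# Back-calculate how much N2O the observed Desulfosporosinus increase accounts for,
# using the cell counts and the growth yield reported in the text.
cells_start = 3.5e4     # cells mL^-1 at the onset of Phase II
cells_end = 1.2e6       # cells mL^-1 after N2O consumption
yield_cells = 3.1e8     # cells mmol^-1 N2O (reported growth yield)

delta_cells_per_L = (cells_end - cells_start) * 1e3   # cells L^-1
n2o_mM = delta_cells_per_L / yield_cells              # mmol L^-1 of N2O implied
print(f"{n2o_mM:.2f} mM N2O")                         # ~3.76 mM
```

The implied ~3.8 mM of N 2 O is close to the nominal 4.16 mM dose, so the yield and the qPCR counts are mutually consistent.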
To investigate the specific nutritional requirements of the Desulfosporosinus sp. in co-culture EV, untargeted metabolome analysis was conducted on supernatant collected from axenic Serratia sp. cultures growing with pyruvate and during N 2 O consumption (Phase II) following inoculation with co-culture EV (Fig. ). Peaks representing potential metabolites were searched against a custom library (Supplementary Dataset ) and 33 features could be assigned to known structures, including seven amino acids (alanine, glutamate, methionine, valine, leucine, aspartate, and tyrosine). Cystine, the oxidized derivative of the amino acid cysteine, was also detected; however, neither cystine nor cysteine was found in cultures where dithiothreitol (DTT) replaced cysteine as the reductant, suggesting that Serratia did not excrete either compound into the culture supernatant. Time series metabolome analysis of culture supernatant demonstrated dynamic changes to the amino acid profile following inoculation with the Serratia sp. and the Desulfosporosinus sp. (as co-culture EV) (Fig. ). Alanine, valine, leucine, and aspartate increased during pyruvate fermentation (Phase I) and were not consumed by the Serratia sp. (Supplementary Fig. ). Consumption of alanine, valine, leucine, and aspartate did occur following the inoculation of the Desulfosporosinus sp. (as co-culture EV) (Fig. ). These findings suggest that the N 2 O-reducing Desulfosporosinus sp. is an amino acid auxotroph, and a series of growth experiments explored whether amino acid supplementation (Supplementary Table ) could substitute for pyruvate fermentation by the Serratia sp. in enabling N 2 O consumption by the Desulfosporosinus sp. The addition of individual amino acids ( n = 20) was not sufficient to initiate N 2 O reduction in pH 4.5 medium, nor was the combination of alanine, valine, leucine, aspartate, and tyrosine.
Incomplete N 2 O consumption (<20% of initial dose) was observed in cultures supplemented with the 5-amino acid combination plus methionine. N 2 O reduction and growth of the Desulfosporosinus sp. occurred without delay in cultures supplied with a 15-amino acid mixture (Fig. ). Omission of single amino acids from the 15-amino acid mixture led to incomplete N 2 O reduction, similar to what was observed with the 6-amino acid combination. Efforts to isolate the Desulfosporosinus sp. in medium without pyruvate but amended with amino acids were unsuccessful because of concomitant growth of the Serratia sp., as verified with qPCR.
pH range of N 2 O reduction by the Desulfosporosinus sp.
Growth assays with co-culture EV were performed to determine the pH range for N 2 O reduction. Co-culture EV reduced N 2 O at pH 4.5, 5.0 and 6.0, but not at pH 3.5, 7.0 and 8.0. Cultures at pH 4.5 exhibited about two times longer lag periods (i.e., 10 versus 5 days) prior to the onset of N 2 O consumption than cultures incubated at pH 5.0 or 6.0 (Supplementary Fig. ). In medium without amino acid supplementation, pyruvate fermentation was required for the initiation of N 2 O consumption (Fig. ), raising the question of whether pH impacts pyruvate fermentation by the Serratia sp., N 2 O reduction by the Desulfosporosinus sp., or both processes. Axenic Serratia sp. cultures fermented pyruvate over a pH range of 4.5 to 8.0, with the highest pyruvate consumption rates of 1.47 ± 0.04 mmol L −1 day −1 observed at pH 6.0 and 7.0, and the lowest rates measured at pH 4.5 (0.43 ± 0.05 mmol L −1 day −1 ) (Supplementary Fig. ). The N 2 O consumption rates in co-culture EV between pH 4.5 and 6.0 were similar and ranged from 0.24 ± 0.01 to 0.26 ± 0.01 mmol L −1 day −1 (Supplementary Fig. ). These findings suggest that pyruvate fermentation by Serratia sp., not N 2 O reduction by Desulfosporosinus sp., explains the extended lag periods observed at pH 4.5 (Supplementary Fig. ). Consistently, shorter lag phases for both N 2 O reduction and Desulfosporosinus growth were observed in co-culture EV amended with the amino acid mixture (Fig. ).
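The link between fermentation rate and lag period can be made explicit; assuming zero-order (constant-rate) pyruvate consumption, a simplification for illustration, the time needed to ferment the 2.5 mM pyruvate dose at the measured rates is:

```python
# Days required to ferment 2.5 mM pyruvate at the measured rates,
# assuming a constant (zero-order) consumption rate throughout.
pyruvate_mM = 2.5
rates = {"pH 4.5": 0.43, "pH 6.0/7.0": 1.47}  # mmol L^-1 day^-1 (measured)
for ph, rate in rates.items():
    print(f"{ph}: {pyruvate_mM / rate:.1f} days to exhaust the pyruvate dose")
```

This yields roughly 5.8 days at pH 4.5 versus 1.7 days at pH 6.0/7.0, consistent with the approximately two-fold longer lag before N 2 O consumption observed at pH 4.5.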
Phylogenomic reconstruction based on concatenated alignment of 120 bacterial marker genes corroborated the affiliation of the N 2 O-reducing bacterium with the genus Desulfosporosinus (Fig. ). The genus Desulfosporosinus comprises strictly anaerobic, sulfate-reducing bacteria, and Desulfosporosinus acididurans strain SJ4 and Desulfosporosinus acidiphilus strain M1 were characterized as acidophilic sulfate reducers. Genome analysis revealed shared features between the N 2 O-reducing Desulfosporosinus sp. and characterized Desulfosporosinus spp. (Supplementary Note ). The N 2 O-reducing Desulfosporosinus sp. in co-culture EV possesses the aprAB and dsrAB genes encoding adenylyl sulfate reductase and dissimilatory sulfate reductase, respectively, but lacks the sat gene encoding sulfate adenylyltransferase/sulfurylase. To provide experimental evidence that the N 2 O-reducing Desulfosporosinus sp. in co-culture EV lacks the ability to reduce sulfate, a hallmark feature of the genus Desulfosporosinus , comparative growth studies were performed. The N 2 O-reducing Desulfosporosinus sp. in co-culture EV did not grow with sulfate as sole electron acceptor, consistent with an incomplete dissimilatory sulfate reduction pathway (Supplementary Fig. ). Desulfosporosinus acididurans strain D , a close relative of the N 2 O-reducing Desulfosporosinus sp. in co-culture EV, grew with sulfate in pH 5.5 medium, but did not grow with N 2 O as electron acceptor under the same incubation conditions (Supplementary Fig. ). These observations corroborate the genomic analysis that the N 2 O-reducing Desulfosporosinus sp. lacks the ability to perform dissimilatory sulfate reduction. Based on phylogenetic and physiologic features, the N 2 O-reducer in co-culture EV represents a novel Desulfosporosinus species, for which the name Desulfosporosinus nitrosoreducens strain PR is proposed ( https://seqco.de/i:32619 ).
N 2 O reduction in Desulfosporosinus nitrosoreducens strain PR
The strain PR genome harbors a single nosZ gene affiliated with clade II (Fig. ). Independent branch placement of the strain PR NosZ on the clade II NosZ tree suggests an ancient divergence, a finding supported by comparing the NosZ amino acid identity (AI) with the genome-wide average amino acid identity (AAI) of the closest matching NosZ-encoding genome. Specifically, comparisons between the proteins encoded on the genomes of Desulfosporosinus nitrosoreducens strain PR and Desulfosporosinus meridiei showed genus-level AAI relatedness (i.e., AAI 73.83%), which was significantly higher than the AI of the encoded NosZ (i.e., AI 44%), indicating fast evolution of this protein and/or horizontal nosZ acquisition from a distant relative (Figs. and ). The NosZ of Desulfosporosinus nitrosoreducens strain PR is slightly more similar (AI: 45%) to the NosZ of the distant relative Desulfotomaculum ruminis . Comparative analysis of the strain PR nos gene cluster with bacterial and archaeal counterparts corroborated characteristic clade II features, including a Sec translocation system, genes encoding cytochromes and an iron-sulfur protein, and a nosB gene located immediately downstream of nosZ (Fig. ). nosB encodes a transmembrane protein of unknown function and has been found on clade II, but not clade I nos clusters. The nos gene clusters of closely related taxa (e.g., Desulfosporosinus meridiei , Desulfitobacterium dichloroeliminans , Desulfitobacterium hafniense ) show similar organization; however, differences were observed in the nos gene cluster of Desulfosporosinus nitrosoreducens strain PR. Specifically, the genes encoding an iron-sulfur cluster protein and cytochromes precede nosZ in Desulfosporosinus meridiei , but are located downstream of two genes encoding proteins of unknown functions in strain PR (Fig. ).
Of note, among the microbes with nos operons and included in the analyses, only Desulfosporosinus nitrosoreducens and Nitratiruptor labii , both with a clade II nos cluster, were experimentally validated to grow with N 2 O below pH 6.
Functional annotation of the Serratia sp. and the Desulfosporosinus nitrosoreducens strain PR genomes was conducted to investigate the interspecies interactions (Fig. ). A btsT gene encoding a specific, high-affinity pyruvate/proton symporter and genes implicated in pyruvate fermentation (i.e., pflAB , poxB ) are present on the Serratia genome, but are missing on the strain PR genome, consistent with the physiological characterization results. fdhC genes encoding a formate transporter are present on both genomes, but only the strain PR genome harbors the fdh gene cluster encoding a formate dehydrogenase complex (Supplementary Fig. ), consistent with the observation that the Serratia sp. excretes formate, which strain PR utilizes as electron donor for N 2 O reduction (Supplementary Fig. ). Gene clusters encoding two different Ni/Fe-type hydrogenases (i.e., hyp and hya gene clusters) (Supplementary Fig. ) and a complete nos gene cluster (Fig. ) are present on the strain PR genome, but not on the Serratia sp. genome. Based on the KEGG and UniProt databases , the Serratia genome contains the biosynthetic pathways (100% completeness) for aspartate, lysine, threonine, tryptophan, isoleucine, serine, leucine, valine, glutamate, arginine, proline, methionine, tyrosine, cysteine, and histidine. In contrast, only aspartate and glutamate biosynthesis are predicted to be complete on the strain PR genome, whereas the completeness level for biosynthetic pathways of other amino acids was below 80%. The Serratia genome encodes a complete set of TCA cycle enzymes, indicating the potential for forming aspartate and glutamate via transamination of oxaloacetate and α-ketoglutarate. In contrast, the strain PR genome lacks genes encoding malate dehydrogenase, citrate synthase, and aconitate hydratase, indicative of an incomplete TCA cycle. Therefore, strain PR is deficient in de novo formation of precursors for glutamate, aspartate, alanine, and related amino acids .
A high-affinity amino acid transport system was found on the strain PR genome (Supplementary Fig. ), suggesting this bacterium can efficiently acquire extracellular amino acids to meet its nutritional requirements.
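The pathway-completeness percentages cited above follow the usual convention of annotation tools such as KofamKOALA: the fraction of a pathway's defined steps (KO identifiers) detected on a genome. A schematic sketch of that metric, using placeholder KO identifiers rather than the actual pathway definitions:

```python
def pathway_completeness(required, detected):
    """Percent of a pathway's required steps found among a genome's annotations."""
    return 100.0 * len(set(required) & set(detected)) / len(set(required))

# Placeholder identifiers for illustration only (not real pathway definitions).
pathway = ["K00001", "K00002", "K00003", "K00004", "K00005"]
genome_hits = ["K00001", "K00002", "K00004", "K00005", "K09999"]
print(f"{pathway_completeness(pathway, genome_hits):.0f}% complete")  # 80% complete
```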
A few studies reported limited N 2 O reduction activity in acidic microcosms, but enrichment cultures for detailed experimentation were not obtained , , . Possible explanations for the observed N 2 O consumption in acidic microcosms include residual activity of existing N 2 O-reducing biomass (i.e., cells synthesized NosZ during growth with N 2 O as respiratory electron acceptor at a permissible pH show NosZ activity at lower pH; however, no synthesis of new NosZ occurs at acidic pH), or the presence of microsites on soil particles where solid phase properties influence local pH, generating pH conditions not captured by bulk aqueous phase pH measurements , , . Soil slurry microcosms providing such microsites with favorable (i.e., higher) pH conditions can give the false impression of low pH N 2 O consumption. Removal of solids during transfers eliminates this niche, exposing microorganisms to bulk phase pH, a plausible explanation for the difficulty establishing N 2 O-reducing transfer cultures under acidic conditions. Our work with acidic tropical soils highlights another crucial issue, specifically the choice of carbon source for the successful transition from microcosms to soil-free enrichment cultures. Lactate sustained N 2 O reduction in pH 4.5 Luquillo tropical soil microcosms, but transfer cultures commenced N 2 O reduction only when pyruvate substituted lactate. Lactate has a higher p K a value than pyruvate (3.8 versus 2.45), indicating that a larger fraction of protonated, and potentially toxic, lactic acid exists at pH 4.5 . As discussed above, in soil microcosms, particles with ion exchange capacity (i.e., microsites) can suppress inhibitory effects of protonated organic acids, a possible explanation why lactate supported N 2 O reduction in the microcosms but not in the enrichment cultures. Fifteen repeated transfers with N 2 O, pyruvate, and H 2 yielded a co-culture comprising a Serratia sp. and a Desulfosporosinus sp. 
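The p K a argument above can be made quantitative with the Henderson–Hasselbalch relation: the fraction of a monoprotic acid present in its protonated (neutral, membrane-permeable) form at a given pH, using the p K a values quoted in the text:

```python
def protonated_fraction(pH, pKa):
    """Henderson-Hasselbalch: fraction of a monoprotic acid in its neutral HA form."""
    return 1.0 / (1.0 + 10 ** (pH - pKa))

for acid, pKa in [("lactic acid", 3.8), ("pyruvic acid", 2.45)]:
    frac = protonated_fraction(4.5, pKa)
    print(f"{acid}: {100 * frac:.1f}% protonated at pH 4.5")
```

At pH 4.5, roughly 17% of lactate but less than 1% of pyruvate is protonated, an approximately 19-fold difference consistent with the greater potential toxicity of lactic acid under these conditions.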
The rapid enrichment of a co-culture was surprising considering that pyruvate and H 2 are substrates for many soil microbes. N 2 O was the sole electron acceptor provided to the defined basal salt medium, with some CO 2 being formed during pyruvate fermentation (Phase I); however, no evidence was obtained for H 2 -driven CO 2 reduction to acetate or to methane. In co-culture EV, the initial dose of N 2 O resulted in an aqueous concentration of 2 mM, substantially higher than the reported inhibitory constants for corrinoid-dependent microbial processes – , and both CO 2 /H 2 reductive acetogenesis and hydrogenotrophic methanogenesis would not be expected to occur in the enrichment cultures, a prediction the analytical measurements support. Available axenic and mixed denitrifying cultures obtained from circumneutral pH soils reduce N 2 O at circumneutral pH, but not under acidic pH conditions , , . Rhodanobacter sp. strain C01, a facultative anaerobe isolated from acidic (pH 3.7) soil was reported to reduce N 2 O at pH 5.7 ; however, growth with N 2 O at pH 5.7 was not demonstrated, and it is possible the observed N 2 O reduction activity occurred at higher pH (Supplementary Fig. ). Characterization of Nitratiruptor labii , a facultative anaerobic, strictly chemolithoautotrophic, halophilic deep-sea vent thermophile with a pH optimum of 6.0, provided some evidence for N 2 O reduction activity at pH 5.4, but not at pH 5.2 . The discovery and cultivation of co-culture EV comprising Desulfosporosinus nitrosoreducens strain PR provides unambiguous evidence that a soil bacterium can grow with N 2 O as electron acceptor at pH 4.5. Interestingly, strain PR reduces N 2 O between pH 4.5 and 6.0, but no N 2 O reduction was observed at or above pH 7. 
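The ~2 mM aqueous N 2 O concentration noted above can be reproduced with a simple closed-bottle Henry's-law partitioning estimate; the bottle and medium volumes are from the Methods, while the Henry constant (~0.024 mol L −1 atm −1 ) and the 25 °C equilibration temperature are assumed literature values, not numbers from this study:

```python
# Partition a 416 umol N2O dose between 100 mL liquid and 60 mL headspace
# (160-mL bottle), assuming equilibrium and an assumed literature Henry constant.
R, T = 0.082057, 298.15   # L atm K^-1 mol^-1; ~25 degrees C (assumed)
kH = 0.024                # mol L^-1 atm^-1, Henry constant for N2O (~25 C, literature)
H_cc = kH * R * T         # dimensionless concentration ratio C_aq / C_gas

n_umol, V_aq, V_gas = 416.0, 0.100, 0.060   # dose and volumes (L)
c_aq = n_umol / (V_aq + V_gas / H_cc)       # umol L^-1 in the liquid phase
print(f"{c_aq / 1000:.1f} mM aqueous N2O")  # ~2.1 mM
```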
This finding implies that Desulfosporosinus nitrosoreducens cannot be enriched with N 2 O as electron acceptor at or above pH 6.5, suggesting the maintenance of acidic pH conditions during enrichment is crucial for the cultivation of microorganisms capable of low pH N 2 O reduction. Apparently, pH selects for distinct groups of N 2 O reducers, with prior research focused on facultative anaerobic, denitrifying isolates obtained at circumneutral pH. The discovery of Desulfosporosinus nitrosoreducens strain PR lends credibility to the hypothesis that the diverse nosZ genes found in acidic soil metagenomes may indeed be functional. Of note, nosZ genes in acidic soils are often found on the genomes of strict anaerobes , suggesting that diverse anaerobic bacteria capable of low pH N 2 O reduction await discovery. Desulfosporosinus nitrosoreducens strain PR sequences were rare in the soil metagenome suggesting that this bacterium was not abundant at the time of sampling, but low abundance members of a community can drive relevant ecosystem processes . Time series sampling would be needed to reveal the in situ population dynamics. The cultivation of strain PR provides a blueprint for unraveling a largely unknown diversity of low pH N 2 O reducers and exploring the geochemical parameters that govern this process in acidic soils. Desulfosporosinus nitrosoreducens strain PR possesses a clade II nos gene cluster similar to those found in neutrophilic clade II N 2 O reducers without clearly distinguishing features based on gene content and synteny (Fig. ). Experimental work with Paracoccus denitrificans , a model organism harboring a clade I nosZ and used for studying denitrification to N 2 , has led to plausible explanations why acidic pH impairs N 2 O reduction activity . 
For example, acidic pH may hinder the binding of Cu 2+ to the highly conserved histidine residues in the Cu A and/or Cu Z sites, implying that NosZ from bacteria capable of low pH N 2 O reduction should have altered Cu A and Cu Z sites. Cu A is involved in electron transfer and the CX 2 FCX 3 HXEM motif was 100% conserved (Supplementary Fig. ) , . The Cu Z site lacks a conserved motif but has seven characteristic histidine residues with 100% conservation (Supplementary Fig. ). An alignment of curated NosZ sequences, including NosZ of Desulfosporosinus nitrosoreducens strain PR, revealed that both clade I and clade II NosZ share 100% conservation of Cu A and Cu Z features. NosZ is a periplasmic enzyme with the mode of secretion differing between clade I versus clade II NosZ organisms. Clade II NosZ follow the general secretion route known as the Sec-pathway, which translocates unfolded proteins across the cytoplasmic membrane. In contrast, clade I NosZ are translocated in their folded state via the Twin-arginine pathway (Tat-pathway) . nosB , a gene encoding a transmembrane protein of unknown function, has been exclusively found associated with clade II nos clusters (Fig. ) , . To what extent nos cluster auxiliary gene content and the secretion pathway influence the pH response of NosZ is unclear and warrants further genetic/biochemical studies. Other factors relevant for N 2 O reduction at acidic pH include the organism’s ability to cope with the potential toxicity of protonated organic acids and to maintain pH homeostasis , . The Desulfosporosinus nitrosoreducens strain PR genome harbors multiple genes associated with DNA repair and potassium transport, suggesting this bacterium can respond to pH stress. These observations suggest that organismal adaptations to low pH environments play a role, but future research should explore if specific features of NosZ from acidophiles enable N 2 O reduction activity under acidic conditions. 
Soils harbor diverse microbial communities with intricate interaction networks that govern soil biogeochemical processes , including N 2 O turnover, and define the functional dynamics of microbiomes , . Interspecies cooperation between bacteria can enhance N 2 O reduction by promoting electron transfer or by providing essential nutrients (as demonstrated in co-culture EV), whereas competition for electron donor(s) or metal cofactors (i.e., copper) can limit N 2 O reduction , . Metabolomic workflows revealed that Serratia sp. strain MF excretes amino acids during growth with pyruvate, which Desulfosporosinus nitrosoreducens strain PR requires to initiate N 2 O reduction, a finding supported by genome functional predictions (i.e., 15 complete amino acid biosynthesis pathways in Serratia sp. strain MF versus only two complete amino acid biosynthesis pathways in strain PR). Interspecies interactions based on amino acid auxotrophies have been implicated in shaping dynamic anaerobic microbial communities, bolstering community resilience, and thus promoting functional stability . Other microbes can potentially fulfill the nutritional demands of Desulfosporosinus nitrosoreducens , and the observed commensalism between the Serratia sp. and Desulfosporosinus nitrosoreducens strain PR might have developed coincidentally during the enrichment process. Members of the genus Desulfosporosinus have been characterized as strictly anaerobic sulfate reducers with the capacity to grow autotrophically with H 2 , CO 2 , and sulfate, or, in the absence of sulfate, with pyruvate . Most characterized Desulfosporosinus spp. show optimum growth at circumneutral pH (~7) conditions, except for the acidophilic isolates Desulfosporosinus metallidurans , Desulfosporosinus acidiphilus , Desulfosporosinus acididurans , and Desulfosporosinus sp. strain I2, which perform sulfate reduction at pH 4.0, 3.6, 3.8, and 2.6, respectively , – .
Among the 10 Desulfosporosinus species with sequenced genomes, only the neutrophilic Desulfosporosinus meridiei (DSM 13257) carries a nos gene cluster , but its ability to reduce N 2 O has not been demonstrated. Desulfosporosinus nitrosoreducens strain PR lacks the hallmark feature of sulfate reduction and is the first acidophilic, strictly anaerobic soil bacterium shown to be capable of growth with N 2 O as electron acceptor at pH 4.5, but not at or above pH 7. Strain PR couples N 2 O reduction and growth at pH 4.5 with the oxidation of H 2 or formate, and our experimental efforts with co-culture EV could not demonstrate the utilization of other electron donors. The four characterized acidophilic representatives of the genus Desulfosporosinus show considerable versatility, and various organic acids, alcohols, and sugars, in addition to H 2 , support sulfate reduction , , . The utilization of H 2 as electron donor appears to be a shared feature among Desulfosporosinus spp., and two or more gene clusters encoding hydrogenase complexes were found on the available Desulfosporosinus genomes , . Escalating use of N fertilizers to meet societal demands for agricultural products accelerates N cycling, and the resulting soil acidification is predicted to increase N 2 O emissions. Liming is commonly employed to ameliorate soil acidity, a practice considered beneficial for curbing N 2 O emissions based on the assumption that microbial N 2 O reduction is favored in circumneutral pH soils , , , . Our findings demonstrate that soil harbors microorganisms (e.g., Desulfosporosinus nitrosoreducens strain PR) that utilize N 2 O as growth-supporting electron acceptor between pH 4.5 and 6.0. Metagenomic surveys have shown that bacteria capable of low pH N 2 O reduction are not limited to acidic tropical soils, and are more broadly distributed in terrestrial ecosystems . Apparently, acidophilic respiratory N 2 O reducers exist in acidic soil and have the potential to mitigate N 2 O emissions.
Recent efforts have shown success in substantially reducing N 2 O emissions from circumneutral and acidic field soils treated with organic waste containing the clade II N 2 O-reducer Cloacibacterium sp. CB-01 . The discovery of a naturally occurring acidophilic soil bacterium that couples N 2 O consumption to growth between pH 4.5-6.0 offers new opportunities to tackle the N 2 O emission challenge and develop knowledge-based management strategies to reduce (i.e., control) N 2 O emissions from acidic agricultural soils. Curbing undesirable N 2 O emissions at the field scale would allow farmers to further reduce their greenhouse gas emissions footprint and potentially earn carbon credits.
Soil sampling locations and microcosms
Soil samples were collected in August 2018 at the El Verde research station in the El Yunque Natural Forest in Puerto Rico . The measured soil pH was 4.45 and characteristic for the region. The El Verde research station lies 434 m above mean sea level. Fresh soil materials from 9 to 18 cm depth were used to establish pH 4.5 laboratory microcosms that were amended with N 2 O and lactate .
Enrichment process
Transfer cultures were established in 160-mL glass serum bottles containing 100 mL of anoxic, completely synthetic, defined basal salt medium prepared with modifications . The mineral medium consisted of (g L −1 ): NaCl (1.0); MgCl 2 •6H 2 O (0.5); KH 2 PO 4 (7.0); NH 4 Cl (0.3); KCl (0.3); CaCl 2 •2H 2 O (0.015); l -cysteine (0.031) or dithiothreitol (0.15). The medium also contained 1 mL of a trace element solution, 1 mL Se/Wo solution, and 0.25 mL resazurin solution (0.1% w/w). The trace element solution contained (mg L −1 ): FeCl 2 •4H 2 O (1,500); CoCl 2 •6H 2 O (190); MnCl 2 •4H 2 O (100); ZnCl 2 (70); H 3 BO 3 (6); Na 2 MoO 4 •2H 2 O (36); CuCl 2 •2H 2 O (2); and 10 mL HCl (25% solution, w/w). The Se/Wo solution consisted of (mg L −1 ): Na 2 SeO 3 •5H 2 O (6); NaWO 4 •2H 2 O (8), and NaOH (500). The serum bottles with N 2 headspace were sealed with butyl rubber stoppers (Bellco Glass, Vineland, NJ, USA) held in place with aluminum crimp caps. Following autoclaving, the measured medium pH ranged between 4.27 and 4.35. All subsequent amendments to the cultivation vessels used sterile plastic syringes and needles to augment the medium with aqueous, filter-sterilized (0.2 µm polyethersulfone membrane filters, Thermo Fisher Scientific, Waltham, MA, USA) stock solutions and undiluted gases . Ten mL of N 2 O gas (416 µmol, 4.16 mM nominal; 99.5%) was added 24 hours prior to inoculation.
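The stated gas amounts follow from the ideal gas law for a 10 mL dose at about 1 atm; the dosing temperature (~20 °C) is an assumption chosen here because it reproduces the quoted 416 µmol:

```python
# Moles in a 10 mL gas dose (ideal gas), and the nominal concentration
# obtained by spreading that amount over the 100 mL liquid phase.
R = 0.082057                  # L atm K^-1 mol^-1
P, V, T = 1.0, 0.010, 293.15  # 1 atm, 10 mL, ~20 degrees C (assumed)
n_umol = P * V / (R * T) * 1e6
nominal_mM = n_umol / 1000 / 0.100   # mmol per litre of medium (100 mL)
print(f"{n_umol:.0f} umol, {nominal_mM:.2f} mM nominal")
```

This returns 416 µmol and 4.16 mM nominal, matching the dosing figures used throughout the Methods.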
The bottles were inoculated (1%, v/v) from an El Verde microcosm (established in 160 mL glass serum bottles containing 100 mL of basal salt medium and ∼2 g [wet weight] of soil) showing N 2 O reduction activity . The microcosm was manually shaken before 1 mL aliquots were transferred with a 3-mL plastic syringe and a 2-gauge needle. Initial attempts to obtain solid-free enrichment cultures with 5 mM lactate as carbon source and electron donor showed no N 2 O reduction activity. The following substrates were subsequently tested in the transfer cultures: 5 mM propionate, 20 mM pyruvate, 20 mM pyruvate plus 10 mL (416 µmol, 4.16 mM nominal) hydrogen (H 2 ), 1 mM formate plus 1 mM acetate and 5 mL (208 µmol, 2.08 mM nominal) CO 2 , and 0.1 or 10 g L −1 yeast extract. Subsequent transfers (3%, v/v) used medium supplemented with 0.5 or 2.5 mM pyruvate and 10 mL H 2 , and occurred when the initial dose of 10 mL N 2 O had been consumed. All culture vessels were incubated in upright position at 30 °C in the dark without agitation.
Microbial community analysis
16S rRNA gene amplicon sequencing was performed on samples collected from 6 th -generation transfer cultures following complete N 2 O consumption, and 9 th -generation transfer cultures following complete pyruvate consumption (Phase I) and complete N 2 O consumption (Phase II). Cells from 1 mL of culture suspension samples were collected by centrifugation (10,000 x g, 20 min, 4 °C), and genomic DNA was isolated from the pellets using the DNeasy PowerSoil Kit (Qiagen, Hilden, Germany). 16S rRNA gene-based amplicon sequencing was conducted at the University of Tennessee Genomics Core following published procedures . Primer set 341F-785R and primer set 515F-805R were used for amplicon sequencing of DNA extracted from 6 th and 9 th generation transfer cultures, respectively . Analysis of amplicon reads was conducted with nf-core/ampliseq v2.3.1 using Nextflow .
Software used in nf-core/ampliseq was containerized with Singularity v3.8.6 . Amplicon read quality was evaluated with FastQC v0.11.9 and primer removal used Cutadapt v3.4 . Quality control including removal of sequences with poor quality, denoising, and chimera removal was performed, and amplicon sequence variants (ASVs) were inferred using DADA2 . Barrnap v0.9 was used to discriminate rRNA sequences as potential contamination . ASVs were taxonomically classified based on the Silva v138.1 database . Relative and absolute abundances of ASVs were calculated using Qiime2 v2021.8.0 . Short-read fragments of the El Verde soil metagenome representing 16S rRNA genes were identified and extracted using Parallel-Meta Suite v3.7 .
Isolation efforts
Following 15 consecutive transfers, 100 µL cell suspension aliquots were serially diluted in basal salt medium and plated on tryptic soy agar (TSA, MilliporeSigma, Rockville, MD, USA) medium. Colonies with uniform morphology were observed, and a single colony was transferred to a new TSA plate. This process was repeated three times before a single colony was transferred to liquid basal salt medium (pH 4.5) amended with 2.5 mM pyruvate, 416 µmol N 2 O, and 416 µmol H 2 . Following growth, DNA was extracted for PCR amplification with general bacterial 16S rRNA gene-targeted primer pair 8F-1541R (Integrated DNA Technologies, Inc. [IDT], Coralville, IA, USA), and Sanger sequencing of both strands yielded a 1471-bp long 16S rRNA gene fragment. Efforts to isolate the N 2 O reducer applied the dilution-to-extinction principle . Ten-fold dilution-to-extinction series used 20 mL glass vials containing 9 mL of basal salt medium and 0.8% (w/v) low melting agarose (MP Biomedicals, LLC., Solon, OH) with a gelling temperature below 30 °C . Each glass vial received 2.5 mM pyruvate, 1 mL (41.6 µmol, 4.16 mM nominal) H 2 and 1 mL (41.6 µmol, 4.16 mM nominal) N 2 O following heat sterilization.
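The dilution-to-extinction principle can be sketched numerically; treating the Phase II Desulfosporosinus density reported in the Results (1.2 × 10 6 cells mL −1 ) as the inoculum density is an assumption made here for illustration:

```python
# Expected cells delivered to each vial of a 10-fold dilution-to-extinction series.
# Each 10-mL vial holds 9 mL medium plus 1 mL of the preceding dilution.
start = 1.2e6    # cells mL^-1 (assumed inoculum density, from the Phase II qPCR value)
vial_mL = 10.0
expected = {n: start * 10 ** (-n) * vial_mL for n in range(1, 11)}
for n, cells in expected.items():
    note = "  <- fewer than one cell expected" if cells < 1 else ""
    print(f"10^-{n}: {cells:.2g} cells per vial{note}")
```

With this starting density, vials at the 10^-8 dilution and beyond are expected to receive fewer than one cell, so growth in the highest positive dilution most likely originates from a single cell.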
Parallel 10 −1 to 10 −10 dilution-to-extinction series were established in liquid basal salt medium without low melting agarose, which were used to inoculate the respective soft agar dilution vials. The same dilution-to-extinction procedure was performed in liquid medium and soft agar dilution vials with the 15-amino acid mixture (Supplementary Table ) substituting pyruvate. Additional attempts to isolate the N 2 O reducer used solidified (1.5% agar, w/v) basal salt medium. A 1-mL sample of a 15 th -generation transfer culture that actively reduced N 2 O was 10-fold serially diluted in liquid basal salt medium, and 100 µL of cell suspension aliquots were evenly distributed on the agar surface. The plates were incubated under an atmosphere of N 2 /H 2 /N 2 O (8/1/1, v/v/v), and colony formation was monitored every 2 weeks over a 6-month period. Following the isolation of the Serratia sp., a two-step approach was tested to isolate the N 2 O reducer. First, the axenic Serratia sp. was grown in defined basal salt medium amended with 2.5 mM pyruvate as the sole substrate. Following complete consumption of pyruvate, the supernatant (i.e., spent medium) was filter-sterilized and transferred to sterile 20 mL glass vials inside an anoxic chamber (N 2 /H 2 , 97/3, v/v) (Coy Laboratory Products, Inc., Grass Lake, MI, USA). The vials received 1 mL H 2 and 1 mL N 2 O, and were inoculated from a 10 −1 to 10 −10 serial dilution series of co-culture EV comprising the pyruvate-fermenting Serratia sp. and the N 2 O-reducing Desulfosporosinus sp. This approach tested if the spent medium contains growth factors (i.e., amino acids) that met the nutritional requirement of the N 2 O-reducing Desulfosporosinus sp., without the need for pyruvate addition and associated growth of the Serratia sp. Based on the observation that the N 2 O-reducing Desulfosporosinus sp. is a spore former (Supplementary Fig. 
), co-culture EV bottles that had completely consumed pyruvate and N 2 O were heated to 60 °C or 80 °C for 30 minutes, and cooled to room temperature before serving as inocula (10%, v/v) of fresh medium bottles containing the 15-amino acid mixture, 10 mL H 2 , and 10 mL N 2 O.
Quantitative PCR (qPCR)
A SYBR Green qPCR assay targeting the 16S rRNA gene of the Serratia sp., and a TaqMan qPCR assay targeting the 16S rRNA gene of the Desulfosporosinus sp. were designed using Geneious Prime (Supplementary Table ). Probe and primer specificities were examined by in silico analysis using the Primer-BLAST tool , and experimentally confirmed using 1538 bp- and 1467 bp-long synthesized linear DNA fragments (IDT) of the complete 16S rRNA genes of the Serratia sp. and the Desulfosporosinus sp., respectively. For enumeration of Serratia 16S rRNA genes, 25 µL qPCR tubes received 10 µL 1X Power SYBR Green, 9.88 µL UltraPure nuclease-free water (Invitrogen, Carlsbad, CA, USA), 300 nM of each primer, and 2 µL template DNA. For enumeration of Desulfosporosinus 16S rRNA genes, the qPCR tubes received 10 µL TaqMan Universal PCR Master Mix (Life Technologies, Carlsbad, CA, USA), 300 nM of TaqMan probe (5’−6FAM-AAGCTGTGAAGTGGAGCCAATC-MGB-3’) (Thermo Fisher Scientific), 300 nM of each primer, and 2 µL template DNA . All qPCR assays were performed using an Applied Biosystems ViiA 7 system (Applied Biosystems, Waltham, MA, USA) with the following amplification conditions: 2 min at 50 °C and 10 min at 95 °C, followed by 40 cycles of 15 sec at 95 °C and 1 min at 60 °C. The standard curves were generated using 10-fold serial dilutions of the linear DNA fragments carrying a complete sequence of the Serratia sp. (1,538 bp) or the Desulfosporosinus sp. (1467 bp) 16S rRNA gene, covering the 70- and 72-bp qPCR target regions, respectively. The qPCR standard curves established with the linear DNA fragments carrying complete Serratia sp. or Desulfosporosinus sp.
16S rRNA genes had slopes of −3.82 and −3.404, y-intercepts of 37.408 and 34.181, R 2 values of 0.999 and 1, and qPCR amplification efficiencies of 82.7% and 96.7%, respectively. The linear range spanned 1.09 to 1.09 × 10 8 gene copies per reaction, with a calculated detection limit of 10.9 gene copies per reaction. Genome analysis revealed single-copy 16S rRNA genes on both the Serratia sp. and the Desulfosporosinus sp. genomes, indicating that enumeration of 16S rRNA genes estimates cell abundances. The 16S rRNA gene sequences of the Serratia sp. and the Desulfosporosinus sp. are available under NCBI accession numbers OR076433 and OR076434, respectively.

Nutritional interactions in the co-culture

To explore the nutritional requirements of the Desulfosporosinus sp., a time series metabolome analysis of culture supernatant was conducted. Briefly, the axenic Serratia sp. culture was grown in basal salt medium amended with 2.5 mM pyruvate, 4.16 mM (nominal) H 2 , and 4.16 mM (nominal) N 2 O. Following a 7-day incubation period, during which pyruvate was completely consumed, the bottles received 1% (v/v) co-culture EV inoculum from a 15 th transfer culture. Cell suspension aliquots (1.5 mL) were collected and centrifuged, and the resulting cell-free supernatants were transferred to 2 mL plastic tubes and immediately stored at −80 °C for metabolome analysis. Additional samples assessed the metabolome associated with supernatant of axenic Serratia sp. cultures that received 1 mM DTT instead of 0.2 mM l-cysteine as reductant. The results of the metabolome analysis guided additional growth experiments with amino acid mixtures replacing pyruvate. The 100-fold concentrated aqueous 15-amino acid stock solution contained (g L −1 ): alanine (0.5); aspartate (1); proline (1); tyrosine (0.3); histidine (0.3); tryptophan (0.2); arginine (0.5); isoleucine (0.5); methionine (0.4); glycine (0.3); threonine (0.5); valine (0.9); lysine (1); glutamate (1); serine (0.8).
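The amplification efficiencies reported for the qPCR standard curves above follow directly from the curve slopes via E = 10^(−1/slope) − 1, and unknowns are quantified from the same curve. A minimal sketch in Python (function names and the example Cq value of 25 are illustrative, not from the original):

```python
def amplification_efficiency(slope):
    """qPCR efficiency from a standard-curve slope (Cq vs. log10 gene copies)."""
    return 10 ** (-1 / slope) - 1

def copies_from_cq(cq, slope, intercept):
    """Gene copies per reaction from Cq, using Cq = slope * log10(N) + intercept."""
    return 10 ** ((cq - intercept) / slope)

# Slopes reported for the Serratia and Desulfosporosinus assays
print(round(amplification_efficiency(-3.82) * 100, 1))   # 82.7 (%)
print(round(amplification_efficiency(-3.404) * 100, 1))  # 96.7 (%)

# Illustrative Cq of 25 on the Serratia curve (slope -3.82, intercept 37.408)
print(round(copies_from_cq(25, -3.82, 37.408)))          # 1771 copies per reaction
```

The reported 82.7% and 96.7% efficiencies are recovered exactly from the published slopes, which is a useful internal consistency check.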
The stock solution was filter-sterilized and stored in the dark at room temperature. Growth of co-culture EV in medium amended with the 15-amino acid mixture increased the pH by no more than 0.3 pH units, to a maximum observed pH of 4.6.

Metagenome sequencing

DNA was isolated from the axenic Serratia sp. culture grown with 2.5 mM pyruvate and from the N 2 O-reducing 15 th -generation co-culture EV grown on H 2 , N 2 O, and the amino acid mixture. Metagenome sequencing was performed at the University of Tennessee Genomics Core using the Illumina NovaSeq 6000 platform. Shotgun sequencing generated a total of 494 and 387 Gbp of raw sequences from the axenic Serratia sp. culture and co-culture EV, respectively. Metagenomic short-reads were processed using the nf-core/mag pipeline v2.1.0. Short-read quality was evaluated with FastQC v0.11.9, followed by quality filtering and Illumina adapter removal using fastp v0.20.1. Short-reads mapped to the PhiX genome (GCA_002596845.1, ASM259684v1) with Bowtie2 v2.4.2 were removed. Assembly of processed short-reads used Megahit2 v1.2.9. Binning of assembled contigs was conducted with MetaBAT2 v2.15, and metagenome-assembled genomes that passed CheckM were selected for further analysis. Protein-coding sequences on both genomes were predicted using MetaGeneMark-2, and functional annotation used Blastp against the Swiss-Prot database, KEGG, and the RAST server. Amino acid biosynthesis completeness was evaluated using KofamKOALA. Metagenomic datasets of El Verde soil and a 15 th transfer culture were searched against the Desulfosporosinus nitrosoreducens strain PR genome using blastn. The best hits were extracted using an in-house script embedded in the Enveomics collection. A graphical representation of short-reads recruited to the Desulfosporosinus nitrosoreducens strain PR genome was generated with BlasTab.recplot2.R.
The coverage evenness was assessed based on the distribution of high nucleotide identity reads across the reference genome sequences. Nonpareil v3.4.1 with the weighted NL2SOL algorithm was used to estimate the average coverage level of the metagenomic datasets. Metagenome data of the original El Verde soil were downloaded from the European Nucleotide Archive (accession number PRJEB74473). Metagenomic datasets of co-culture EV and the genome of the axenic Serratia culture were deposited at NCBI under accession numbers SRR24709127 and SRR24709126, respectively (Supplementary Table ).

Comparative analysis of nos gene clusters

Available genomes of select N 2 O reducers were downloaded from NCBI (Supplementary Table ). Functional annotation of the genomes was conducted using the RAST server. The transmembrane topology of the protein encoded by nosB , a gene located immediately adjacent to clade II nosZ , was verified using DeepTMHMM. Accessory genes associated with the Desulfosporosinus nitrosoreducens nosZ were identified using cblaster to perform a gene-cluster-level BLAST analysis against Desulfosporosinus , Desulfitobacterium , and Anaeromyxobacter genomes. The nos gene clusters were visualized using the gggenes package ( https://wilkox.org/gggenes/index.html ).

Phylogenomic analysis

Phylogenomic reconstruction was performed with genomes of the Desulfitobacteriaceae family available in the NCBI database (Supplementary Table ). Conserved marker genes of the 20 genomes were identified and aligned with GTDB-Tk. Phylogenetic relationships were inferred based on the alignment of 120 concatenated bacterial marker genes using RAxML-NG with 1000 bootstrap replicates. The best-fit evolutionary model was selected based on the result of ModelTest-NG. Calculation of Average Amino acid Identity (AAI) and hierarchical clustering of taxa based on AAI values were conducted with EzAAI. Tree annotation and visualization were performed with the ggtree package.
NosZ phylogenetic analysis

NosZ reference sequences were downloaded from pre-compiled models in ROCker. The NosZ sequence of Desulfosporosinus nitrosoreducens strain PR was aligned to the NosZ reference sequences using MAFFT, and a maximum likelihood tree was created with RAxML-NG based on the best model from ModelTest-NG. The inferred tree and the amino acid identity between Desulfosporosinus nitrosoreducens strain PR, Desulfosporosinus meridiei , and the NosZ reference sequences were visualized using the ggtree package.

Metabolome analysis

Cell-free samples were prepared as follows. Briefly, 1.5 mL of 0.1 M formic acid in 4:4:2 (v:v:v) acetonitrile:water:methanol was added to 100 µL aliquots of supernatant samples. The tubes were shaken at 4 °C for 20 minutes and centrifuged at 16,200 × g for 5 minutes. The supernatant was collected and dried under a steady stream of N 2 . The dried extracts were suspended in 300 µL of water prior to analysis. For water-soluble metabolites, the mass analysis was performed in untargeted mode. The chromatographic separations utilized a Synergi 2.6 µm Hydro RP column (100 Å, 100 mm × 2.1 mm; Phenomenex, Torrance, CA) with tributylamine as an ion-pairing reagent, an UltiMate 3000 binary pump (Thermo Fisher Scientific), and previously described elution conditions. The mass analysis was carried out on an Exactive Plus Orbitrap MS (Thermo Fisher Scientific) using negative electrospray ionization and full-scan mode. Following the analysis, metabolites were identified using exact masses and retention times, and the areas under the curve (AUC) for each chromatographic peak were integrated using the open-source software package Metabolomic Analysis and Visualization Engine. Dynamic changes of metabolites over time were assessed by comparative analysis of AUC values.

Phenotypic characterization of co-culture EV

To test for autotrophic growth of co-culture EV, pyruvate was replaced by 5 mL (2.08 mM nominal) of CO 2 (99.5% purity).
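The "nominal" concentrations quoted throughout (e.g., 41.6 µmol and 4.16 mM for 1 mL of gas, 416 µmol and 4.16 mM for 10 mL of gas in the 100-mL cultures, 2.08 mM for 5 mL of CO 2 ) follow from the ideal gas law, with the total gas amount divided by the liquid volume. A sketch reproducing these figures, assuming ~1 atm and 20 °C (the incubation pressure and temperature at gas addition are assumptions, not stated in the text):

```python
R = 8.314           # universal gas constant, J mol^-1 K^-1
P = 101325.0        # Pa; ~1 atm assumed
T = 293.15          # K; ~20 °C assumed

def gas_umol(volume_ml):
    """Amount of an ideal gas in a given volume, in micromoles (n = PV / RT)."""
    return P * (volume_ml * 1e-6) / (R * T) * 1e6

def nominal_mM(volume_ml, liquid_ml):
    """'Nominal' concentration: total gas amount spread over the liquid volume."""
    return gas_umol(volume_ml) / liquid_ml  # µmol per mL equals mM

print(round(gas_umol(10)))            # 416 µmol in 10 mL of gas
print(round(nominal_mM(10, 100), 2))  # 4.16 mM in 100 mL of medium
print(round(nominal_mM(5, 100), 2))   # 2.08 mM for 5 mL of CO2
```

Because the "nominal" value ignores gas-liquid partitioning, the actually dissolved concentration at equilibrium is lower and is given by Henry's law (equation 2 below in Analytical procedures).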
All experiments used triplicate cultures, and serum bottles without pyruvate, without H 2 , without N 2 O, or without inoculum served as controls. Growth experiments were conducted to determine the responses of the Serratia sp. and the Desulfosporosinus sp. to pH. Desired medium pH values of 4.5, 5, 6, 7 and 8 were achieved by adjusting the mixing ratios of KH 2 PO 4 and K 2 HPO 4 . To achieve pH 3.5, the pH 4.5 medium was adjusted with 5 M hydrochloric acid. Replicate incubation vessels received 10 mL (4.16 mM nominal) N 2 O, 10 mL (4.16 mM nominal) H 2 , and 2.5 mM pyruvate, followed, after an overnight equilibration period, by 1% (v:v) inocula from the axenic Serratia sp. culture or the N 2 O-reducing co-culture EV, both pregrown in pH 4.5 medium. The replicate cultures inoculated with the Serratia sp. were incubated for 14 days, after which three vessels received an inoculum of co-culture EV (1%) to initiate N 2 O consumption. Three Serratia sp. cultures not receiving a co-culture EV inoculum served as controls. Consumption rates of pyruvate and N 2 O were calculated based on data points representing linear ranges of consumption according to Eq. (1):

$$V=\frac{N}{T_{1}-T_{0}}$$

where V represents the consumption rate and N represents the initial amount of pyruvate or N 2 O. T 1 refers to the timepoint when pyruvate or N 2 O was completely consumed. T 0 for pyruvate consumption refers to day zero (i.e., after inoculation with the axenic Serratia sp.). T 0 for N 2 O consumption refers to day 14 following inoculation with co-culture EV, which resulted in a linear decrease of N 2 O.

Analytical procedures

N 2 O, CO 2 , and H 2 were analyzed by manually injecting 100 µL headspace samples into an Agilent 3000 A Micro-Gas Chromatograph (Palo Alto, CA, USA) equipped with Plot Q and molecular sieve columns coupled with a thermal conductivity detector. Aqueous concentrations (µM) were calculated from the headspace partial pressures based on reported Henry's law constants for N 2 O (2.4 × 10 −4 ), H 2 (7.8 × 10 −6 ), and CO 2 (3.3 × 10 −4 ) mol (m 3 Pa) −1 according to Eq. (2):

$$H^{cp}RT=\frac{C_{aq}}{C_{g}}$$

where H cp is the Henry's law constant, R is the universal gas constant, T is the temperature, C g is the headspace gas-phase concentration, and C aq is the liquid-phase (dissolved) concentration. Five-point standard curves for N 2 O, CO 2 , and H 2 spanned concentration ranges of 8333 to 133,333 ppmv. Pyruvate, acetate, and formate were measured with an Agilent 1200 Series high-performance liquid chromatography (HPLC) system (Palo Alto, CA, USA). pH was measured with a calibrated pH electrode in 0.4 mL samples of culture supernatant following removal of cells by centrifugation.

Etymology

Desulfosporosinus nitrosoreducens (ni.troso.re.du’cens. nitroso, nitrous oxide (N 2 O), an oxide of nitrogen and intermediate of nitrogen cycling; L. pres. part. reducens, reducing; from L. v. reduco, reduce, convert to a different condition; N.L. part. adj. nitrosoreducens, reducing N 2 O).

Reporting summary

Further information on research design is available in the Reporting Summary linked to this article.
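The headspace-to-aqueous conversion described under Analytical procedures rearranges to C aq = H cp · R · T · C g , or, working from partial pressures, C aq = H cp · p. A minimal sketch using the Henry's law constants listed in the text, assuming 25 °C and ~1 atm total headspace pressure (neither is stated explicitly):

```python
R = 8.314           # universal gas constant, J mol^-1 K^-1
T = 298.15          # K; 25 °C assumed
P_TOTAL = 101325.0  # Pa; ~1 atm total headspace pressure assumed

# Henry's law solubility constants from the text, mol m^-3 Pa^-1
H_CP = {"N2O": 2.4e-4, "H2": 7.8e-6, "CO2": 3.3e-4}

def aqueous_uM(gas, ppmv):
    """Dissolved concentration (µM) in equilibrium with a headspace mixing ratio (ppmv)."""
    partial_pressure = ppmv * 1e-6 * P_TOTAL   # Pa
    c_aq = H_CP[gas] * partial_pressure        # mol m^-3, i.e., mmol L^-1
    return c_aq * 1000                         # µM

def dimensionless_partition(gas):
    """C_aq / C_g = H^cp * R * T, the dimensionless form of equation (2)."""
    return H_CP[gas] * R * T

print(round(aqueous_uM("N2O", 8333)))            # 203 µM at the low end of the standard curve
print(round(dimensionless_partition("N2O"), 2))  # 0.59
```

The dimensionless partition coefficient below 1 for all three gases explains why most of each added gas remains in the headspace and why "nominal" concentrations overstate the dissolved fraction.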
Supplementary Information

Peer Review File
Description of Additional Supplementary Files
Supplementary Dataset 1
Reporting Summary
Response of Soil Microbiota, Enzymes, and Plants to the Fungicide Azoxystrobin | 02819eaf-e9ef-4866-b930-73d7225be67b | 11311602 | Microbiology[mh] | Pesticides underpin the maintenance of plant quality and health through their key role in eradicating diseases, pests, and weeds . Azoxystrobin is a fungicide from the group of strobilurins, extensively used in agricultural production due to its broad spectrum of effects and high efficacy against fungal pathogens of crops . Strobilurin was originally isolated from the fungus Strobilurus tenacellus in 1977, whereas azoxystrobin was introduced on the German market in 1996 . Edwards et al. have pointed out that the half-life of azoxystrobin in soil spans from 14 days to 6 months, which is related to the activity of soil microorganisms and enzymes. In addition, azoxystrobin is a fungicide from the group of external quinone inhibitors that inhibit mitochondrial respiration by blocking the transfer of electrons in the cytochrome bc1 complex. Moreover, it inhibits the oxidation of nicotinamide adenine dinucleotide (NADH) and adenosine triphosphate (ATP) and exerts multifaceted effects on Ascomycetes , Basidiomycetes , and Oomycetes fungi. Its EC 50 (half-maximal effective concentration) was reported to range from 0.003 to 0.031 µg mm −3 of the liquid culture medium against Cercospora zeae-maydis and from 0.12 to 297.22 µg mm −3 of the liquid culture medium against Aspergillus flavus . Azoxystrobin has often been detected in different ecosystems at higher than acceptable concentrations and therefore may pose a severe threat to organisms found therein . Pesticides are effective in protecting plants and boosting their yield. However, when used in non-observance of good agricultural practice, they may elicit adverse effects on animal health, as well as water and food quality . 
Improper and long-term use of fungicides can lead to changes in soil ecosystems, as such use disturbs the abundance, activity, and functioning of the soil microbiota, as well as the biogeochemical cycles of nitrogen, carbon, phosphorus, and sulfur. This, in turn, may deteriorate the quality and fertility of the soil, which plays a major role in the environment by, for instance, providing nutrients to organisms, increasing plant production, and maintaining environmental biodiversity . Soil organisms contribute to its proper functioning by influencing its physicochemical and biological properties, which ultimately affect crop productivity. Soil microorganisms are essential for maintaining proper ecological balance, soil fertility, plant growth, and pesticide degradation . By secreting various types of enzymes, lipids, and other biologically active macromolecules, they can affect the fate of pesticides in the soil environment . Therefore, assessing the impact of pesticides, including fungicides, on the soil microbiota and soil biochemical processes is a sound action for maintaining the sustainable development of soil ecosystems . Fungicides and their metabolites can be a major stressor for soil microorganisms, reducing both their diversity and their function in the soil and ultimately impairing ecological functionality . Continuous use of fungicides also has detrimental effects on microbial metabolism, soil nutrient cycling, and plant function. Thus, excessive use of these chemicals can lead to their high accumulation and prolonged persistence in the soil system, which adversely affects the soil environment . An example is the study by Verdenelli et al. , who noted a significant reduction in the abundance and diversity of gram-positive and gram-negative bacteria and arbuscular fungi under the influence of carbendazim and iprodione applied at the highest doses (4.50 mg kg −1 d.m. soil and 8.30 mg kg −1 d.m. soil, respectively).
Another example is difenoconazole introduced into the soil at 0.04 mg kg −1 d.m. soil, which led to a decrease in microbial biomass in loamy-sandy soil, as the microorganisms used more energy to detoxify the environment than for their growth . According to Chamberlain et al. , the composition and diversity of soil microorganisms affect the regulation of the main functions of the soil, i.e., the cycling of elements, and these functions in turn indirectly affect plant growth and yield by, among other things, supplying plants with nutrients. Soil enzymes are believed to originate mainly from microorganisms, but also from plant and animal remains entering the soil. They accumulate in the soil as free enzymes or are stabilized on soil organic matter. Soil enzymes are essential for microbial life functions, as they participate in all biochemical processes occurring in the soil and increase the rate of organic matter decomposition reactions, resulting in the release of nutrients into the soil environment. Due to their stability and sensitivity, they are used as indicators of soil health . A study by Filimon et al. showed that difenoconazole introduced into chernozem soil at doses of 37, 75, and 150 mg kg −1 d.m. soil, under both field and laboratory conditions, inhibited the activity of all enzymes tested (dehydrogenases, urease, protease, and acid phosphatase), with the strongest inhibitory effect on dehydrogenases activity. The application of another fungicide, myclobutanil, at a dose of 0.1 mg kg −1 d.m. soil contributed to an increase in dehydrogenases activity, while higher doses (1.0 and 10 mg kg −1 ) significantly inhibited the activity of these enzymes . In a study conducted by Satapute et al.
determining the effect of propiconazole (doses of 1.0, 15.0, and 20.0 kg ha −1 ) on enzymes in red sandy-loam and deep-black soils, it was found that the tested fungicide stimulated the activity of urease and phosphatases in the first 2 weeks, whereas after 3 weeks the activity of these enzymes significantly decreased at the sites treated with 15.0 and 20.0 mg kg −1 of propiconazole. Moreover, the activity of these enzymes was higher in the deep-black soil than in the red sandy loam. Fungicides can also be taken up by plants, which can lead to increased production of reactive oxygen species, with subsequent inhibition of normal physiological and biochemical processes in plants and disruption of photosynthesis, thereby reducing yields. Examples of the adverse effects of such chemicals on plants include pendimethalin, which increasingly inhibits the germination of Zea mays L. seeds as its concentration in the soil rises, and fipronil, which significantly reduced the germination of rice seeds compared to control soil . The use of fungicides can cause biochemical and physiological changes in antioxidants, which initially affects plant germination, then growth and development, and ultimately yield . However, the inactivation of fungicides and their elimination from the soil environment occur mainly through microbial processes, as microorganisms are capable of producing enzymes that carry out catabolic processes . It has been observed that microorganisms belonging to the phyla Actinomycetota, Proteobacteria, Bacteroidetes, Cyanobacteria, Firmicutes, and Basidiomycota are more abundant in soil contaminated with fungicides than in non-contaminated (control) soil.
Tremendous abilities to degrade fungicides are demonstrated by microorganisms of the genera Acinetobacter , Achromobacter , Agrobacterium , Alcaligenes , Arthrobacter , Azospirillum , Enterobacter , Bacillus , Burkholderia , Cupriavidus , Flavimonas , Brevibacterium , Flavobacterium , Klebsiella , Micrococcus , Methylobacterium , Mesorhizobium , Ochrobactrum , Paenibacillus , Pseudomonas , Pseudaminobacter , Rhizobium , Ralstonia , Serratia , Shinella , Sphingomonas , Streptomyces , Xanthomonas , and Yersinia . The aim of this study was, therefore, to assess the effect of soil amendment with two doses of azoxystrobin (a field dose and a contaminating dose) on microbiota, enzymes, and plants 30, 60, and 90 days after its application. The study results will allow for a broader understanding of changes in soil microbial populations and of the biochemical processes taking place in the soil environment. In addition, they may reveal differences in the sensitivity of microorganisms, enzymes, and plants to azoxystrobin. The research hypotheses assumed that the accumulation of azoxystrobin in soil causes (a) severe disorders in the microbiome, (b) destabilization of enzyme activity, and (c) inhibition of plant growth and development.
2. Results

2.1. Response of Soil Microbiota to Azoxystrobin

Statistical analysis showed that the number of organotrophic bacteria was most affected by the interaction of the studied factors (η 2 = 42.28%); that of actinobacteria, by soil incubation time (η 2 = 59.00%); and that of fungi, by azoxystrobin dose (η 2 = 51.30%) . Compared to the control soil (soil C), the number of organotrophic bacteria in soil F (field dose) increased 1.2-fold and 1.3-fold on days 30 and 90 of the experiment, respectively, and that of actinobacteria increased 1.9-fold, 1.1-fold, and 1.2-fold on days 30, 60, and 90, respectively, whereas the number of fungi decreased 1.2-fold on day 30, 2.2-fold on day 60, and 1.4-fold on day 90 . In the case of soil P (polluting dose), on day 30 of the experiment, the numbers of organotrophic bacteria and actinobacteria were 1.6-fold and 1.3-fold higher than in the control, respectively. Analyses conducted on days 60 and 90 demonstrate the inhibition of soil microbiota proliferation in this soil, as evidenced by a 1.3-fold and 1.2-fold decrease in the abundance of organotrophic bacteria, respectively, and a 1.1-fold decrease in that of actinobacteria on both dates. Fungi responded to the highest dose of azoxystrobin with a decrease in their numbers on all dates of analyses: a 1.6-fold decrease was noted in soil P on day 30, a 4.0-fold decrease on day 60, and a 2.6-fold decrease on day 90, compared to the control soil. Regardless of azoxystrobin dose, the highest mean number of organotrophic bacteria (3.069 × 10 9 cfu kg −1 d.m. soil) and actinobacteria (2.105 × 10 9 cfu kg −1 d.m. soil) was observed on day 90, whereas that of fungi was observed on day 30 (1.350 × 10 7 cfu kg −1 d.m. soil). Proliferation in the soil was the most intensive for organotrophic bacteria and the least intensive for fungi.
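The η 2 values quoted throughout the results are eta-squared effect sizes: the share of the total sum of squares attributable to each source of variation in the two-factor ANOVA (dose, incubation time, their interaction, and error). A minimal sketch of that partitioning; the sums of squares below are hypothetical, not taken from the study:

```python
def eta_squared_percent(sums_of_squares: dict) -> dict:
    """Eta-squared per source, as a percentage of the total sum of squares:
    eta^2 = SS_source / SS_total * 100."""
    ss_total = sum(sums_of_squares.values())
    return {source: round(100.0 * ss / ss_total, 2)
            for source, ss in sums_of_squares.items()}

# Hypothetical ANOVA sums of squares for one response variable
ss = {"dose": 51.3, "time": 30.0, "dose x time": 10.0, "error": 8.7}
print(eta_squared_percent(ss))  # shares of observed variance, summing to 100%
```

Because the shares sum to 100%, a large η 2 for one factor (e.g., dose for fungi) necessarily leaves little variance for the others, which is how the dominant factor is identified for each microbial group.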
Diversified effects of azoxystrobin over time are confirmed by changes in the population numbers of microorganisms . The inhibition of the counts of organotrophic bacteria ranged from 9.50% (soil F, day 60) to 37.98% (soil P, day 60); of actinobacteria, from 10.46% (soil P, day 60) to 11.66% (soil P, day 90); and of fungi, from 14.11% (soil F, day 30) to 74.82% (soil P, day 60). In the F and P soil samples, the growth of organotrophic bacteria was stimulated on day 30 and inhibited on day 60. In the case of actinobacteria, an increase in their number was recorded on all test dates in soil F, whereas a decrease was recorded in soil P on days 60 and 90. In turn, the number of fungi decreased in soils F and P in all terms of analyses (days 30, 60, and 90 of the experiment). The colony development index (CD) of microorganisms was affected to the greatest extent by the incubation time of the soil (from 52.75% to 78.63%), to a lesser extent by the interaction of factors (from 8.58% to 12.39%), and to the least extent by the azoxystrobin dose (from 1.54% to 11.79%). Taking into account the incubation time of the soil, the highest CD values of organotrophic bacteria (mean CD = 62.758) and actinobacteria (mean CD = 24.623) were recorded on day 90, and the highest CD of fungi (mean CD = 24.832) on day 30. The CD of organotrophic bacteria was the highest in soil C on day 90 (CD = 66.617); that of actinobacteria, in soil F on day 90 (CD = 24.861); and that of fungi, in soil F on day 30 (CD = 25.645). Of all groups of microorganisms, the highest CD value was computed for organotrophic bacteria and the lowest for fungi. Statistical analysis of the observed variance showed that the incubation time of the soil had the strongest impact on the ecophysiological diversity index (EP) of organotrophic bacteria (η 2 = 64.3%) and actinobacteria (η 2 = 70.10%), whereas the azoxystrobin dose had the strongest impact on the EP of fungi (η 2 = 51.62%).
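Both indices discussed above are derived from the distribution of colonies emerging on successive days of plate incubation. The forms sketched below, CD after Sarathchandra et al. and EP after De Leij et al., are the ones conventionally associated with these index names; treat the exact formulas as an assumption, since the methods section is not reproduced here:

```python
import math

def colony_development(daily_counts):
    """Colony development index: CD = sum over days d of (N_d / d) * 100,
    where N_d is the share of all colonies that emerged on day d.
    A high CD indicates dominance of fast-growing r-strategists."""
    total = sum(daily_counts)
    return 100.0 * sum((n / total) / day
                       for day, n in enumerate(daily_counts, start=1))

def ecophysiological_diversity(daily_counts):
    """Ecophysiological diversity index: EP = -sum(p_d * log10(p_d)),
    a Shannon-type measure over the same daily shares p_d."""
    total = sum(daily_counts)
    return -sum((n / total) * math.log10(n / total)
                for n in daily_counts if n > 0)

# Colonies skewed toward early days (r-strategists) vs. spread over 10 days
print(colony_development([60, 25, 10, 5]))               # weighted toward day 1
print(round(ecophysiological_diversity([10] * 10), 3))   # maximal EP for 10 even classes
```

With this form, CD approaches 100 when all colonies appear on day 1, and EP peaks when colonies are spread evenly across observation days, matching the interpretation of r- versus k-strategists used later in the discussion.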
The organotrophic bacteria, actinobacteria, and fungi had the highest EP values in soil C on day 60 (i.e., 0.833, 0.844, and 0.875, respectively). Regardless of the azoxystrobin dose applied, the highest EP values were computed for organotrophic bacteria (mean EP = 0.762), actinobacteria (mean EP = 0.825), and fungi (mean EP = 0.761) on day 60 of the experiment. Azoxystrobin also contributed to the soil imbalance, as evidenced by the index of soil return to the equilibrium state, i.e., the resilience index (RL) . The greatest changes occurred in the fungal population, because the RL values computed on days 60 and 90 were negative (mean RL = −0.734 and RL = −0.471, respectively). In the case of organotrophic bacteria and actinobacteria, the mean RL values were positive in all terms of analyses. shows a phylogenetic tree of bacteria, and shows a phylogenetic tree of fungi. Soil C was most heavily populated by the bacteria PP952050.1 Bacillaceae bacterium strain (C) and PP952049.1 Bacillus cereus strain (C), and by the fungi PP952060.1 Talaromyces pinophilus isolate KF751644.1 PP955260.1 and PP952061.1 Trichoderma pnophilus isolate (C). In turn, soil F was colonized by PP952047.1 Priestia megaterium strain (F), PP952048.1 Peribacillus simplex (F), PP952058.1 Penicillium chrysogenum isolate (F), and PP952059.1 Talaromyces pinophilus isolate (F), whereas soil P was colonized by PP952052.1 Priestia megaterium strain (P), PP952051.1 Bacillus mycoides strain (P), and PP952062.1 Keratinophyton terreum isolate (P), which may tolerate and degrade azoxystrobin.

2.2. Response of Soil Enzymes to Azoxystrobin

In this study, soil incubation time had the greatest impact on soil enzyme activity (η 2 ranged from 87.86% to 99.76%), while the other variables analyzed, namely azoxystrobin dose and the interaction of factors, had little effect on soil enzymes . Regardless of the fungicide dose, the highest dehydrogenases activity (mean 28.361 µmol TPF kg −1 d.m.
soil h −1 ) and urease activity (mean 2.129 mmol N-NH 4 kg −1 d.m. soil h −1 ) were determined on day 30, whereas the highest catalase activity (mean 0.536 mol O 2 kg −1 d.m. soil h −1 ) and alkaline phosphatase activity (mean 2.369 mmol PNP kg −1 d.m. soil h −1 ) were determined on day 60, and the highest acid phosphatase activity (mean 1.969 mmol PNP kg −1 d.m. soil h −1 ) was determined on day 90. In soil F, an increase in the activity of dehydrogenases, catalase, and alkaline phosphatase compared to soil C was observed in all terms of analyses (days 30, 60, and 90), whereas acid phosphatase activity increased on day 30 and urease activity decreased on day 90. In soil P, dehydrogenases activity decreased compared to soil C in all terms of analyses; alkaline phosphatase and acid phosphatase activities decreased on days 30 and 90; and urease activity decreased on day 90 of the experiment. In the same soil, an increase in catalase activity was observed on days 30 and 60, and an increase in alkaline phosphatase activity on day 60 . Dehydrogenases activity decreased by 0.45% (soil P, day 90) to 3.75% (soil P, day 60); catalase activity, by 1.36% (soil P, day 90); alkaline phosphatase activity, by 5.39% (soil P, day 90) to 11.45% (soil P, day 30); acid phosphatase activity, by 0.98% (soil F, day 90) to 8.55% (soil P, day 90); and urease activity, by 0.84% (soil P, day 30) to 18.88% (soil P, day 90) . In all analytical terms, the activity of dehydrogenases, catalase, and alkaline phosphatase was stimulated in soil F, whereas dehydrogenases and urease activities were inhibited in soil P. In addition, in soil P, the activity of catalase increased significantly on days 30 and 60, whereas the activities of alkaline phosphatase and acid phosphatase were suppressed on days 30 and 90. Azoxystrobin caused significant changes in the sandy clay soil, which is confirmed by the values of the soil resilience index .
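The resilience index referenced above measures whether a treated soil parameter converges back toward the control over time. The widely used formulation of RL is that of Orwin and Wardle; assuming that is the form applied here, it can be sketched as:

```python
def resilience_index(control_0, treated_0, control_x, treated_x):
    """Resilience index RL (after Orwin & Wardle; assumed form):
        RL = 2*|D0| / (|D0| + |Dx|) - 1,
    where D0 = C0 - P0 is the initial control-vs-treated difference and
    Dx = Cx - Px is the difference at a later sampling. RL = 1 means full
    recovery; RL < 0 means the parameter drifted further from the control."""
    d0 = control_0 - treated_0
    dx = control_x - treated_x
    if d0 == 0 and dx == 0:
        return 1.0
    return 2 * abs(d0) / (abs(d0) + abs(dx)) - 1

# Treated value returns to the control level -> full resilience
print(resilience_index(10.0, 8.0, 10.0, 10.0))  # 1.0
# Gap triples instead of closing -> negative RL, as observed here for
# fungi and for acid phosphatase activity
print(resilience_index(10.0, 8.0, 10.0, 4.0))   # -0.5
```

RL is bounded between −1 and 1, so the negative mean values reported for fungi and acid phosphatase indicate that the disturbance deepened rather than recovered between samplings.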
The greatest disturbance in the soil was noted based on dehydrogenases activity (mean RL = −0.144 on day 60 and RL = −0.037 on day 90) and acid phosphatase activity (mean RL = −0.594 on day 60 and RL = −0.461 on day 90). Adverse changes were also observed in urease activity on day 60 (mean RL = −0.544). The highest mean RL values were determined for catalase activity, followed by alkaline phosphatase activity. Their positive RL values indicate that these enzymes are able to return to a state of biochemical equilibrium.

2.3. Pearson's Simple Correlation Coefficients between Microbiological and Biochemical Soil Parameters

The activity of dehydrogenases, alkaline phosphatase, and urease was significantly positively correlated with the EP of organotrophic bacteria and actinobacteria, while negatively correlated with the number of actinobacteria and the CD of organotrophic bacteria and actinobacteria. An opposite correlation was found for acid phosphatase. Catalase activity was significantly positively correlated with the abundance of organotrophic bacteria, fungi, and actinobacteria and with the CD of actinobacteria. In addition, the activities of dehydrogenases and alkaline phosphatase were significantly negatively correlated with the count of organotrophic bacteria .

2.4. Response of Plants to Azoxystrobin

The percentage of the observed variability (η 2 ) of the factors examined showed that the azoxystrobin dose elicited the greatest changes in plant growth and development . The dose explained from 64.89% ( S. saccharatum L.) to 87.57% ( S. alba L.) of the variability in seed germination and from 65.01% ( S. saccharatum L.) to 70.68% ( S. alba L.) of that in root growth. The incubation time of the soil explained from 2.23% ( S. alba L.) to 23.97% ( S. saccharatum L.) of the variability in seed germination and from 18.95% ( S. alba L.) to 26.14% ( L. sativum L.) of that in root growth.
The dose of azoxystrobin and the duration of its retention in the soil significantly affected the germination of seeds and the elongation of plant roots . In soil P, the greatest inhibition of the germination of L. sativum L. and S. saccharatum L. seeds occurred on day 90 (by 58.43% and 54.23%, respectively), and that of S. alba L. seeds on day 60 (by 63.92%). The greatest inhibition of root elongation was recorded on day 90. Compared to the control soil, root elongation decreased by 54.90% for L. sativum L., by 50.92% for S. alba L., and by 53.78% for S. saccharatum L. A significant inhibition of seed germination and elongation of plant roots compared to soil C was also observed in soil F; however, this inhibition was less pronounced than in soil P.
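The germination and root-elongation figures above are percent inhibition relative to the control soil, a standard readout of Phytotoxkit-type bioassays. A minimal sketch; the control and treated values below are hypothetical, chosen only to reproduce the order of magnitude reported:

```python
def inhibition_percent(control_value: float, treated_value: float) -> float:
    """Percent inhibition relative to the control: 100 * (C - T) / C.
    Negative values would indicate stimulation rather than inhibition."""
    return 100.0 * (control_value - treated_value) / control_value

# Hypothetical mean root lengths (mm): control soil vs. soil P on day 90
print(round(inhibition_percent(45.0, 20.3), 2))  # 54.89, comparable in scale to the
                                                 # ~54.90% reported for L. sativum L.
```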
3. Discussion

3.1. Response of Soil Microbiota to Azoxystrobin

Pesticides, including fungicides and their metabolites, can exert an immediate effect on soil microorganisms, triggering changes in their population numbers and diversity . One such pesticide is azoxystrobin, which, 21 and 28 days after application to the soil at a dose of 10 mg kg −1 , significantly reduced the numbers of bacteria and actinobacteria compared to the control soil but had no significant effect on the fungal population . In the present study, azoxystrobin applied at the field dose stimulated the growth of organotrophic bacteria and actinobacteria but inhibited the growth of fungi. In turn, its contaminating dose reduced the population numbers of all analyzed groups of microorganisms. A small amount of the fungicide in the soil could have been used by organotrophic bacteria and actinobacteria as a source of nutrients; therefore, the recommended dose of azoxystrobin could increase their numbers . However, a fungicide dose several times higher than the agronomic dose may directly affect the survival of microorganisms by disrupting the metabolic pathways in their cells . The sensitivity of microorganisms to increased amounts of azoxystrobin may be due to oxidative stress generated upon the release of electrons from the respiratory chain in the form of reactive oxygen species . Fungicides affect not only the proliferation of microbial populations but also their diversity . They modify the composition of the microbial community because of their effects on non-target organisms . The present study demonstrated differences in the colony development index (CD) and the ecophysiological diversity index (EP) of microorganisms. In the soil treated with azoxystrobin, there was an increase in the CD values computed for organotrophic bacteria, actinobacteria, and fungi compared to the control soil.
The r-strategists (fast-growing microorganisms) prevailed among organotrophic bacteria, whereas the k-strategists (slow-growing microorganisms) prevailed among actinobacteria and fungi. This was evidenced by the CD values, the means of which were CD = 50.27 for organotrophic bacteria and CD = 23.49 and CD = 24.38 for actinobacteria and fungi, respectively. Therefore, it can be concluded that the addition of chemical compounds to the soil may determine the proportions between r-strategists and k-strategists . Azoxystrobin exerted diverse effects on the EP of microorganisms, which was also influenced by the time of its retention in the soil. However, the greatest changes caused by the presence of azoxystrobin in the soil, manifested by an EP decrease, were observed in the case of fungi. The adaptation of microorganisms to adverse conditions is largely dependent on their activity, as well as on the severity of the stress factor occurring in the soil environment . Fungicides are degraded in the soil environment by various microorganisms that produce specific enzymes capable of their degradation . For example, Alexandrino et al. demonstrated that bacteria of the genera Pseudomonas , Rhodobacter , Ochrobactrum , Comamonas , Hydrogenophaga , Azospirillum , Methylobacillus , and Acinetobacter had high degradation potential against epoxiconazole and fludioxonil, as they degraded 10 mg dm −3 of these fungicides within 21 days. In turn, Feng et al. have reported that Arthrobacter , Bacillus , Cupriavidus , Pseudomonas , Klebsiella , Rhodanobacter , Stenotrophomonas , and Aphanoascus are microorganisms that break down strobilurin compounds. Clinton et al. isolated two species of bacteria from soil contaminated with trifloxystrobin, namely Bacillus flexus and Bacillus amyloliquefaciens , whereas Howell et al. observed that Cupriavidus spp. and Rhodobacter spp. exerted a degrading effect on azoxystrobin. Actinomyces spp. and Ochrobactrum spp.
are also capable of degrading azoxystrobin . In our study, we identified microorganisms, i.e., the bacteria PP952052.1 Priestia megaterium strain (P) and PP952051.1 Bacillus mycoides strain (P) and the fungus PP952062.1 Keratinophyton terreum isolate (P), which may show high tolerance to azoxystrobin and the potential for its degradation. In turn, Feng et al. isolated the bacterial strain Ochrobactrum anthropi SH14 from soil contaminated with azoxystrobin, which was able to degrade 86.30% of a 50 µg cm −3 dose of this pesticide in the medium within 5 days.

3.2. Response of Soil Enzymes to Azoxystrobin

The activity of soil enzymes is closely related to the quality and fertility of the soil. These biological parameters of the soil respond quickly to high pesticide doses . Wang et al. reported that azoxystrobin, used at doses of 0.1 to 10 mg kg −1 , inhibited dehydrogenases activity in all terms of analyses (i.e., on days 7, 14, 21, and 28) and urease activity up to day 14, while stimulating catalase activity and not significantly affecting protease activity. In the experiment described in this manuscript, the contaminating dose of azoxystrobin inhibited the activity of dehydrogenases, alkaline phosphatase, acid phosphatase, and urease, while its agronomic dose enhanced the activities of all analyzed soil enzymes. The inactivating effect of azoxystrobin on soil enzymes may have been due to the inhibition of microbial population multiplication, which indirectly affected the secretion of enzymes whose activity is strongly dependent on the number and biomass of microorganisms . Adverse effects of azoxystrobin (doses: 2.90, 14.65, and 35.00 mg kg −1 ) on the activity of soil enzymes such as dehydrogenases, urease, alkaline phosphatase, acid phosphatase, arylsulfatase, and β-glucosidase were noted in the study by Boteva et al. , with dehydrogenases and arylsulfatase being the most sensitive and urease the most resistant to soil treatment with this pesticide.
In the present study, catalase was the most resistant to azoxystrobin, as evidenced by its enhanced activity in the soil contaminated with this compound. The increase in its activity may suggest that some microorganisms used azoxystrobin as a source of nutrients and energy necessary for their growth, which contributed to boosted catalase secretion by their cells . The increased catalase production by microorganisms probably caused fungicide degradation and strengthened the protective barrier of microorganisms against oxidizing compounds . The positive or negative effects of azoxystrobin on soil enzymes are mainly related to its dose and duration of its retention in the soil . 3.3. Response of Plants to Azoxystrobin In addition to their antifungal activity, strobilurins improve plant quality by intensifying photosynthesis; increasing contents of nitrogen, chlorophyll, and protein; and delaying the aging of plants . Amaro et al. and Chiu-Yueh et al. have pointed to a very strong effect of fungicides from the group of strobilurin compounds on the physiology and growth of plants. In the present study, azoxystrobin added to soil both in the recommended field dose and the contaminating dose, inhibited seed germination and elongation of L. sativum L., S. alba L., and S. saccharatum L. shoots in all analytical terms (days 30, 60, and 90). Eman et al. determined the effect of azoxystrobin applied at the recommended dose and also at doses 0.5-fold and 2-fold higher than the recommended dose on the germination of Triticum aestivum L. and Raphanus sativus L. seeds. They found that azoxystrobin significantly reduced their seed germination percentage and the length of their roots and shoots. Particularly significant reduction in the length of roots and shoots of Triticum aestivum L. and Raphanus sativus L. 
was reported at a 2-fold-higher dose than the recommended one, which reduced the length of roots and shoots in wheat by 13.20% and 26.02%, respectively and in radish by 17.67% and 51.67%, respectively. In the present study, an azoxystrobin dose of 32.92 mg kg −1 caused, on average, 50.31% and 45.26% reduction in the length of shoots and roots of L. sativum L., respectively; 57.52% and 48.29% reductions in S. alba L.; and 45.32% and 44.65% reductions in these traits in S. saccharatum L., respectively. The impaired plant growth could have been due to the blocking of the cytochrome bc1 complex, which inhibited cell division and water uptake by plants . Amaro et al. , who assessed the effect of an azoxystrobin dose of 60 g ha −1 , found a reduction in the rate of CO 2 assimilation, transpiration, stomata conductivity, and carbon concentration in cucumber plants.
Pesticides, including fungicides and their metabolites, can exert an immediate effect on soil microorganisms, triggering changes in their population and diversity . One such pesticide is azoxystrobin, which, 21 and 28 days after application to the soil in a dose of 10 mg kg −1 , significantly reduced the numbers of bacteria and actinobacteria compared to the control soil but had no significant effect on the fungi population . In the present study, azoxystrobin applied in the field dose stimulated the growth of organotrophic bacteria and actinobacteria, but it inhibited the growth of fungi. In turn, its contaminating dose reduced the population numbers of all analyzed groups of microorganisms. A small amount of the fungicide in the soil could have been used by organotrophic bacteria and actinobacteria as a source of nutrients; therefore, the recommended dose of azoxystrobin could increase their numbers . However, a fungicide dose being few or several times higher than the agronomic dose may directly affect the survival of microorganisms by disrupting the metabolic pathways in their cells . The sensitivity of microorganisms to increased amounts of azoxystrobin may be due to oxidative stress generated upon the release of electrons from the respiratory chain in the form of reactive oxygen . Fungicides affect not only the proliferation of microbial populations but also their diversity . They modify the composition of the microbial community because of their effects on non-target organisms . The present study demonstrated differences in the colony development index (CD) and the ecophysiological diversity index (EP) of microorganisms. In the soil treated with azoxystrobin, there was an increase in the CD values computed for organotrophic bacteria, actinobacteria, and fungi compared to the control soil. 
The r-strategies (fast-growing microorganisms) prevailed among organotrophic bacteria, whereas the k-strategists (slow-growing microorganisms) prevailed among actinobacteria and fungi. This was evidenced by the CDs, the mean values of which were CD = 50.27 for organotrophic bacteria and CD = 23.49 and CD = 24.38 for actinobacteria and fungi, respectively. Therefore, it can be concluded that the addition of chemical compounds to the soil may determine the proportions between r-strategists and k-strategists . Azoxystrobin exerted diverse effects on the EP of microorganisms, which was also caused by the time of its retention in the soil. However, the greatest changes caused by azoxystrobin presence in the soil, which were manifested by EP decrease, were observed in the case of fungi. The adaptation of microorganisms to adverse conditions is largely dependent on their activity, as well as on the degree of severity of the stress factor occurring in the soil environment . Fungicides are degraded in the soil environment by various microorganisms that produce specific enzymes capable of their degradation . For example, Alexandrino et al. demonstrated that bacteria of the genera Pseudomonas , Rhodobacter, Ochobacterum , Comamonas , Hydrogenophaga , Azospirillum , Methylbacillus , and Acinetobacter had high degradation potential against epoxiconazole and fludioxonil, as they degraded 10 mg dm −3 of these fungicides within 21 days. In turn, Feng et al. have reported that Arthrobacter , Bacillus , Cupriavidus , Pseudomonas , Klebsiella , Rhodanobacter , Stenothrofomonas , and Aphanoascus are microorganisms that break down strobilurin compounds. Clinton et al. isolated two species of bacteria from the soil contaminated with trifloxystrobin, namely Bacillus flexus and Bacillus amyloliquefaciens , whereas Howell et al. observed that Cuprividus spp. and Rhodobacter spp. exerted a degrading effect against azoxystrobin. Actinomyces spp. and Ochrabactrum spp. 
are also capable of degrading azoxystrobin . In our study, we identified microorganisms, i.e., bacteria PP952052.1 Prestia megaterium strain (P) and PP952051.1 Bacillus mycoides strain (P), and fungi PP952062.1 Keratinophyton terreum isolate (P), which may show high tolerance to azoxystrobin and the potential for its degradation. In turn, Feng et al. isolated a strain of bacteria Chrobacrum anthropi SH14 from soil contaminated with azoxystrobin, which was able to degrade 86.30% of the 50 µg cm −3 medium dose of this pesticide within 5 days.
The activity of soil enzymes is closely related to the quality and fertility of the soil. These biological parameters of the soil respond quickly to the effects of high pesticide doses . Wang et al. reported that azoxystrobin, used at doses of 0.1 to 10 mg kg −1 , inhibited dehydrogenases activity in all terms of analyses (i.e., on days 7, 14, 21, and 28) and urease activity up to day 14 while stimulating catalase activity and not significantly affecting protease activity. In the experiment described in this manuscript, the contaminating dose of azoxystrobin inhibited the activity of dehydrogenases, alkaline phosphatase, acid phosphatase, and urease, while its agronomic dose enhanced activities of all analyzed soil enzymes. The inactivating effect of azoxystrobin on soil enzymes may have been due to the inhibition of microbial population multiplication, which indirectly affected the secretion of enzymes whose activity is strongly dependent on the number and biomass of microorganisms . Adverse effects of azoxystrobin (doses: 2.90, 14.65, and 35.00 mg kg −1 ) on the activity of soil enzymes such as dehydrogenases, urease, alkaline phosphatase, acid phosphatase, arylsulfatase, and β -glucosidase were noted in the study by Boteva et al. , with dehydrogenases and arylsulfatase being the most sensitive, and urease being the most resistant to soil treatment with this pesticide. In the present study, catalase was the most resistant to azoxystrobin, as evidenced by its enhanced activity in the soil contaminated with this compound. The increase in its activity may suggest that some microorganisms used azoxystrobin as a source of nutrients and energy necessary for their growth, which contributed to boosted catalase secretion by their cells . The increased catalase production by microorganisms probably caused fungicide degradation and strengthened the protective barrier of microorganisms against oxidizing compounds . 
The positive or negative effects of azoxystrobin on soil enzymes are mainly related to its dose and duration of its retention in the soil .
In addition to their antifungal activity, strobilurins improve plant quality by intensifying photosynthesis; increasing contents of nitrogen, chlorophyll, and protein; and delaying the aging of plants . Amaro et al. and Chiu-Yueh et al. have pointed to a very strong effect of fungicides from the group of strobilurin compounds on the physiology and growth of plants. In the present study, azoxystrobin added to soil both in the recommended field dose and the contaminating dose, inhibited seed germination and elongation of L. sativum L., S. alba L., and S. saccharatum L. shoots in all analytical terms (days 30, 60, and 90). Eman et al. determined the effect of azoxystrobin applied at the recommended dose and also at doses 0.5-fold and 2-fold higher than the recommended dose on the germination of Triticum aestivum L. and Raphanus sativus L. seeds. They found that azoxystrobin significantly reduced their seed germination percentage and the length of their roots and shoots. Particularly significant reduction in the length of roots and shoots of Triticum aestivum L. and Raphanus sativus L. was reported at a 2-fold-higher dose than the recommended one, which reduced the length of roots and shoots in wheat by 13.20% and 26.02%, respectively and in radish by 17.67% and 51.67%, respectively. In the present study, an azoxystrobin dose of 32.92 mg kg −1 caused, on average, 50.31% and 45.26% reduction in the length of shoots and roots of L. sativum L., respectively; 57.52% and 48.29% reductions in S. alba L.; and 45.32% and 44.65% reductions in these traits in S. saccharatum L., respectively. The impaired plant growth could have been due to the blocking of the cytochrome bc1 complex, which inhibited cell division and water uptake by plants . Amaro et al. , who assessed the effect of an azoxystrobin dose of 60 g ha −1 , found a reduction in the rate of CO 2 assimilation, transpiration, stomata conductivity, and carbon concentration in cucumber plants.
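The percentage reductions quoted in the paragraphs above all follow the same percent-change-versus-control computation. A minimal sketch, with hypothetical shoot lengths (the numbers below are illustrative, not measured values from this study):

```python
# Illustrative only: percent reduction relative to the control,
# as used for the shoot and root length comparisons above.
def percent_reduction(control: float, treated: float) -> float:
    """Return the reduction of `treated` relative to `control`, in %."""
    return (control - treated) / control * 100

# e.g., a shoot shortened from 40 mm (control) to 20 mm (treated)
print(round(percent_reduction(40.0, 20.0), 2))  # 50.0
```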
4.1. Soil Materials
Soil material was collected from the humus horizon (depth of 0 to 20 cm) in Tomaszkowo, located in the north-eastern part of Poland (53.71610° N, 20.41670° E). The soil belonged to the Eutric Cambisols subtype and was formed on sandy loam (69.41% sand fraction, 27.71% clay fraction, and 2.88% silt fraction) . Selected physicochemical and chemical properties of the soil (granulometric composition; pH; hydrolytic acidity; sum of exchangeable base cations; organic carbon content; total nitrogen content; and exchangeable cations K + , Na + , Ca 2+ , and Mg 2+ ) can be found in . The analyses were performed in 3 replicates according to the methodology described in the study by Borowik et al. .
4.2. Azoxystrobin
In the experiment, azoxystrobin was introduced into the soil in the form of Amistar 250 SC (containing 250 g of azoxystrobin per dm 3 of the formulation) at rates, calculated as the pure substance, of 0.110 mg kg −1 (field dose) and 32.92 mg kg −1 (polluting dose). The formulation was manufactured by Syngenta Crop Protection AG (Stein, Switzerland). It was marketed in Poland in 2011, and the distribution authorization was granted to Syngenta (Warsaw, Poland). The authorization of the Amistar 250 SC preparation held by Syngenta expires on 31 December 2025 . The single dose recommended by the manufacturer amounts to 0.5 to 3.0 dm 3 ha −1 . This preparation is used in the protection of arable crops (winter wheat, spring wheat, rye, winter barley, spring barley, winter triticale, spring triticale, and winter oilseed rape) and vegetable crops (potato, onion, green bean, green pea, head cabbage, Chinese cabbage, cauliflower, carrot, lettuce, tomato, leek, celery, and pepper). The selected physicochemical properties of azoxystrobin are presented in . The structural formula of azoxystrobin was drawn using ISIS Draw 2.3 .
4.3. Establishment of the Experiment and Procedure for Conducting the Experiment
The experiment was set up under strictly controlled laboratory conditions in 3 replicates for each combination and each test date (27 beakers in total). Air-dried soil passed through a 2 mm sieve was weighed in 100 g portions into glass beakers (150 cm 3 capacity). In the respective treatments, azoxystrobin in the form of an aqueous emulsion was applied once in the following amounts (mg kg −1 d.m. soil): 0.00 mg (soil without added fungicide), 0.110 mg (field dose), and 32.92 mg (polluting dose). The literature generally describes studies on the impact of small doses of azoxystrobin on soil properties and plant development . Therefore, our research aimed to assess the impact of this active substance in contaminating amounts on the biological parameters of the soil. The soil material was thoroughly homogenized and brought to a moisture content of 50% of the capillary water capacity using distilled water. The soil in each beaker was covered with perforated foil and incubated in a thermostat at a constant temperature (25 °C) for 30, 60, and 90 days. Soil moisture was monitored throughout the experiment, and water losses were replenished. Soil microbiological and enzymatic analyses were performed on three test dates. For the Phytotoxkit tests, a separate batch of the experiment was set up (9 replicates for each combination and each test date, resulting in a total of 81 beakers). A total of 150 g of soil was weighed into each beaker. The conditions for setting up and running this batch were identical to those for the soil used for microbiological and enzymatic analyses.
4.4. Conducting Microbiological Analysis of Soil
At three study dates (30, 60, and 90 days), soil microbiological analysis was carried out using the serial dilution method.
A 10 g portion of each analyzed soil sample was weighed into 90 cm 3 of sterile saline (0.85% NaCl); then, the whole was mixed on a shaker (120 rpm for 30 min), and a series of dilutions was made. An amount of 1 cm 3 of the specified dilution (organotrophic bacteria and actinobacteria—10 −5 , fungi—10 −3 ) and 17 cm 3 of selective medium were introduced into sterile Petri dishes in parallel. Bunt and Rovira medium was used to culture organotrophic bacteria, Küster and Williams medium for actinobacteria, and Martin medium for fungi. The microbial material was incubated for 10 days in a thermostat at 28 °C, and the grown colonies of microorganisms were counted each day. The composition of the microbial media is presented in . The exact procedure for performing the microbiological analysis is described by Kucharski et al. and Wyszkowska et al. . These analyses were performed in 9 replicates for each combination. The number of microorganisms was expressed in colony-forming units per kg of soil dry matter (cfu kg −1 d.m. soil).
4.5. Isolation of Microorganisms from Soil and Their Identification
On day 90 of the experiment, bacteria and fungi were isolated by serial dilution from the control soil and from the soil containing azoxystrobin in the amounts of 0.110 mg kg −1 and 32.92 mg kg −1 . Bacteria and fungi were isolated by suspending 10 g of each of the analyzed soil samples in sterile saline (0.85% NaCl) (1:10 ratio) and making serial dilutions. The prepared dilutions (bacteria—10 −5 , fungi—10 −3 ) were introduced at a rate of 1 cm 3 into Petri dishes (3 repetitions). PCA medium was used to grow bacteria, while fungi were grown on Sabouraud medium, the composition of which is presented in . The prepared microbial material was incubated at 37 °C (for 24 to 48 h). Serial passaging of characteristic colonies of microorganisms was performed to obtain pure cultures.
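Two bits of arithmetic from Sections 4.3 and 4.4 above can be made explicit: the beaker totals implied by the design, and the conversion of plate counts into cfu kg −1 d.m. The sketch below is illustrative only; the colony count and dry-matter fraction are hypothetical, and the assumed convention is that 1 cm 3 of a 10 −d dilution carries 10 −d g of soil (10 g of soil in 90 cm 3 of saline being the 10 −1 step):

```python
# Design sizes stated above: doses x test dates x replicates.
doses, test_dates = 3, 3
beakers_main = doses * test_dates * 3    # 3 replicates -> 27 beakers
beakers_phyto = doses * test_dates * 9   # 9 replicates -> 81 beakers
print(beakers_main, beakers_phyto)       # 27 81

# Sketch: plate count -> cfu per kg of soil dry matter, assuming 1 cm3
# of the 10^-d dilution contains 10^-d g of soil.
def cfu_per_kg_dm(colonies: int, dilution_exp: int, dm_fraction: float = 1.0) -> float:
    cfu_per_g_fresh = colonies * 10 ** dilution_exp  # undo the dilution
    return cfu_per_g_fresh * 1000 / dm_fraction      # g -> kg, to dry matter

# Hypothetical example: 45 colonies on a 10^-5 plate of bacteria
print(cfu_per_kg_dm(45, 5))  # 4500000000.0, i.e., 4.5 x 10^9 cfu kg-1
```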
Genomic DNA was isolated using a Bead-Beat Micro Gravity kit (A&A Biotechnology, Gdansk, Poland), and the isolated DNA was checked by electrophoresis in a 1.0% agarose gel (5 mm 3 of sample per gel). For the PCR reaction, a reaction mixture of the following composition was used: 5 mm 3 (~50 ng) of genomic DNA, 25 mm 3 of 2× PCR Master Mix Plus High GC (A&A Biotechnology, Gdansk, Poland), 0.2 mm 3 of each primer at 100 μM, and 19.6 mm 3 of sterile water. The B-all For (GAG TTT GAT CCT GGC TCA G) and B-all Rev (ACG GCT ACC TTA CGA CTT) primers were used to amplify the 16S rRNA gene of bacteria, while the ITS1 (TTC GTA GGT GAA CCT GCG G) and ITS4 (TCC TCC GCT TAT TGA TAT GC) primers were used to amplify the ITS region of fungi. Conditions for the PCR reaction can be found in . After the PCR reaction, the reaction mixture was separated on an agarose gel (2.0%) (2 mm 3 of sample per gel), and the amplified DNA fragments were purified using the Clean-Up AX kit (A&A Biotechnology, Poland). The resulting PCR products were resuspended in 10.0 mM Tris-HCl pH 8.0 and diluted to a concentration of 100 ng mm −3 . DNA sequencing was performed by Macrogen (Amsterdam, The Netherlands) on a 3730 XL Analyzer DNA analyzer (Life Technologies Holding Pte Ltd., Singapore) . The obtained DNA sequences were compared with GenBank (National Center for Biotechnology Information) data. The DNA sequences of the bacterial 16S rRNA gene were compared using BLAST (Basic Local Alignment Search Tool) software [ https://blast.ncbi.nlm.nih.gov/Blast.cgi (accessed on 1 July 2024)], while the ITS regions of fungi were compared using Internal Transcribed Spacer software [ https://www.applied-maths.com/download/software (accessed on 1 July 2024)].
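As a small consistency check, the PCR mix described above sums to a 50 mm 3 reaction, assuming one forward and one reverse primer at 0.2 mm 3 each:

```python
# Components of the PCR mix described above (volumes in mm3).
dna, master_mix, primer_each, water = 5.0, 25.0, 0.2, 19.6
total = dna + master_mix + 2 * primer_each + water  # forward + reverse primer
print(round(total, 1))  # 50.0
```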
The accession numbers in the GenBank database for the nucleotide sequences of bacteria range from PP952047 to PP952052 [ https://www.ncbi.nlm.nih.gov/nuccore/PP952047.1:PP952052 (accessed on 1 July 2024), https://www.ncbi.nlm.nih.gov/nuccore (accessed on 1 July 2024)], while those of fungi range from PP952058 to PP952062 [ https://www.ncbi.nlm.nih.gov/nuccore/PP952058.1:PP52062.1 (accessed on 1 July 2024)]. Based on the obtained nucleotide sequences of the identified microorganisms, a phylogenetic tree was created using the neighbor-joining method with the MEGA 11 software [ https://www.megasoftware.net/show_eua (accessed on 1 July 2024)] . The conditions for creating the phylogenetic tree in the MEGA 11 software were as follows: statistical method—neighbor-joining (NJ); scope—all selected taxa; test of phylogeny—bootstrap method (no. of bootstrap replications—500 for bacteria and fungi); substitution type—nucleotide; model/method—Maximum Composite Likelihood; substitutions to include—d: transitions and transversions; rates among sites—uniform rates; pattern among lineages—same (homogeneous); gaps/missing data treatment—partial deletion; select codon positions—1st + 2nd + 3rd + noncoding; number of threads—1; searching NJ tree—100%, by sequence difference—0.066 for bacteria and 0.036 for fungi.
4.6. Conducting Enzymatic Analysis of Soil
Enzymatic analyses of the soil were performed at 30, 60, and 90 days (9 replicates), determining the activity of dehydrogenases, catalase, alkaline phosphatase, acid phosphatase, and urease. The following substrates were used: 2,3,5-triphenyltetrazolium chloride for dehydrogenases, hydrogen peroxide for catalase, disodium 4-nitrophenyl phosphate for alkaline and acid phosphatase, and urea for urease. Dehydrogenases activity was determined by the Öhlinger method, catalase by the Johnson and Temple method, and alkaline phosphatase and urease by the Alef and Nannipieri method.
These analyses were performed according to the procedures given in the studies by Wyszkowska et al. and Wyszkowska et al. .
4.7. The Effect of Azoxystrobin on Seed Germination and Plant Root Elongation
The effects of azoxystrobin on the growth and development of plants ( Lepidium sativum L., Sinapis alba L., and Sorghum saccharatum L.) were assessed at the specified test dates (30, 60, and 90 days) using the Phytotoxkit test. The soil (110 g) from the control (soil without added fungicide) and from the treatments with azoxystrobin at 0.110 mg kg −1 and 32.92 mg kg −1 was introduced onto plastic plates. Then, 10 seeds of each plant were placed on moist filter paper in 3 replicates. The material thus prepared was incubated for 72 h (temperature 25 °C). The shoot and root lengths of the test plants were then measured.
4.8. Calculation of Results
The following soil biological indices were calculated at each test date (30, 60, and 90 days):
▪ Colony development index (CD) of microorganisms : CD values range from 0 to 100. CD values close to 100 indicate rapid growth of the microorganism population in a short period. (1) CD = [N 1 /1 + N 2 /2 + N 3 /3 + ⋯ + N 10 /10] × 100 where N 1 , N 2 , N 3 , …, N 10 is the ratio of the number of microbial colonies grown on the 1st, 2nd, 3rd, …, 10th day of incubation to the total number of microbial colonies grown over the entire incubation period (10 days);
▪ The ecophysiological diversity index (EP) of microorganisms takes values from 0 to 1, measuring the stability and homogeneity of microorganisms over time. EP values close to 1 indicate steady growth of microorganisms in the environment.
(2) EP = −Σ( p i × log 10 p i ) where p i is the ratio of the number of microbial colonies grown on a given incubation day to the total number of microbial colonies grown over the entire incubation period (10 days);
▪ Changes (Ch a ) of microbial abundance, enzyme activity, seed germination, and root elongation in soil caused by azoxystrobin: A positive Ch a value indicates stimulation of the analyzed parameters under the influence of azoxystrobin, while a negative value indicates inhibition. (3) Ch a = [(A − C)/C] × 100% where A represents the values of the analyzed parameters in the soil with azoxystrobin, and C represents the values of the analyzed parameters in the control soil;
▪ The resilience index (RL) of azoxystrobin-treated soil is determined from microbial abundance and enzyme activity . RL values range from −1 to 1. A RL value close to −1 indicates that the soil is not returning to equilibrium. A RL value close to 1 indicates that the soil is returning to equilibrium. A RL value close to 0 indicates that the soil is out of, or slightly out of, equilibrium. (4) RL(t x ) = 2|D 0 |/(|D 0 | + |D x |) − 1 where D 0 is the difference in soil microbial numbers and enzyme activity between a control soil sample (C 0 ) and an azoxystrobin-treated soil sample (t 0 ), and D x is the difference in soil microbial numbers and enzyme activity between a control and an azoxystrobin-treated sample after 60 and 90 days of soil incubation.
4.9. Statistical Analyses of Results
The results obtained were statistically processed with a two-way analysis of variance (ANOVA; factor 1: azoxystrobin dose, factor 2: soil incubation time) at p ≤ 0.01 using Statistica 13.3 software . The percentage of the observed variability in the studied soil parameters was determined using the η 2 coefficient. Homogeneous groups were determined using Tukey's test at p ≤ 0.01 to identify significant differences between mean values.
Pearson’s simple correlation coefficients between microbiological and biochemical soil parameters were presented as a heat map.
Soil material was collected from the humus-horizontal soil depth of 0 to 20 cm from Tomaszkowo, located in the north-eastern part of Poland (53.71610° N, 20.41670° E). This was soil belonging to the Eutric Cambisols subtype, which was formed on sandy loam (69.41% sand fraction, 27.71% clay fraction, and 2.88% silt fraction) . Selected physicochemical and chemical properties of the soil (soil granulometric composition; pH, hydrolytic acidity; sum of base exchangeable cations bases; organic carbon content; total nitrogen content; and total exchangeable cations K + , Na + , Ca 2+ , and Mg 2+ ) can be found in . The analyses were performed in 3 replicates according to the methodology described in the study by Borowik et al. .
The experiment conducted introduced azoxystrobin into the soil in the form of Amistar 250 SC (azoxystrobin amounts to 250 g dm −3 of the formulation) as a pure substance at rates of 0.110 mg kg −1 (field dose) and 32.92 mg kg −1 (polluting dose). The formulation was manufactured by Syngenta Crop Protection AG (Stein, Switzerland). The formulation was marketed in Poland in 2011, and the distribution authorization was granted to Syngenta (Warsaw, Poland). The expiry date of the authorization of the preparation of Amistar 250 SC by the company Syngenta is 31 December 2025 . The single dose recommended by the manufacturer amounts to 0.5 to 3.0 dm 3 ha −1 . This preparation is used in the protection of crops (winter wheat, spring wheat, rye, winter barley, spring barley, winter triticale, spring triticale, and winter oilseed rape) and vegetable crops (potato, onion, green bean, green pea, head cabbage, Chinese cabbage, cauliflower, carrot, lettuce, tomato, leek, celery, and pepper). The selected physicochemical properties of azoxystrobin are presented in . The structural formula of azoxystrobin was made using ISIS Draw 2.3 .
The procedure for setting up an experiment under strictly controlled conditions (laboratory experiment) in 3 replicates for each combination and each test date (27 beakers in total). The procedure for setting up the experiment consisted of weighing 100 g each of air-dried soil put through a sieve (2 mm diameter) into glass beakers (150 cm 3 capacity). In the respective sites, azoxystrobin in the form of an aqueous emulsion was applied once in the following amounts (mg kg −1 d.m. soil): 0.00 mg (soil without added fungicide), 0.110 mg (field dose), and 32.92 mg (polluting dose). The literature generally describes studies on the impact of small doses of azoxystrobin on soil properties and plant development . Therefore, our research aimed to assess the impact of this active substance in contaminating amounts on the biological parameters of the soil. The soil material was thoroughly homogenized and brought to a moisture content of 50% of the capillary water capacity using distilled water. The soil in the beaker was covered with perforated foil and incubated in a thermostat maintaining a constant temperature (25 °C) for 30, 60, and 90 days. Soil moisture was monitored throughout the experiment, and soil losses were replenished. Soil microbiological and enzymatic analyses were performed on three test dates. For the Phytotoxkit tests, a separate batch of the experiment was set up (9 replicates for each combination and each test date, resulting in a total of 81 beakers). A total of 150 g of soil was weighed into each beaker. The conditions for setting up and running the experiment were identical to those for the soil used for microbiological and enzymatic analysis.
At three study dates (30, 60, and 90 days), soil microbiological analysis was carried out using the serial dilution method. Into 90 cm 3 of sterile saline (0.85% NaCl) were weighed 10 g of soil of each sample analyzed; then, the whole was mixed on a shaker (120 rpm for 30 min), and a series of dilutions were made. An amount of 1 cm 3 of the specified dilution (organotrophic bacteria and actinobacteria—10 −5 , fungi—10 −3 ) and 17 cm 3 of selective medium were introduced into sterile Petri dishes in parallel. Bunt and Rovira medium for organotrophic bacteria, Küster and Williams medium for actinobacteria, and Martin medium for fungi were used for culture. The microbial material was incubated for 10 days in a thermostat at 28 °C; the grown colonies of microorganisms were counted each day. The composition of the microbial media is presented in . The exact procedure for performing the microbiological analysis is described according to Kucharski et al. and Wyszkowska et al. . These analyses were performed in 9 replicates for each combination. Each day, the grown colonies of microorganisms were counted. The number of microorganisms was expressed in colony-forming units per kg of soil (cfu kg −1 d.m. soil).
On day 90 of the experiment, bacteria and fungi were isolated from the control soil and the soil containing azoxystrobin in the amounts of 0.110 mg kg −1 and 32.92 mg kg −1 by serial dilution. Isolation of bacteria and fungi was carried out by making serial dilutions by suspending 10 g of each of the soil samples analyzed in sterile saline (0.85% NaCl) (1:10 ratio). The prepared dilutions (bacteria—10 −5 and fungi—by 10 −3 ) were introduced at a rate of 1 cm 3 into a Petri dish (3 repetitions). PCA medium was used to grow bacteria, while fungi were grown in Sabouraud medium, the composition of which is presented in . The prepared microbial material was incubated at 37 °C (from 24 to 48 h). Serial passaging of characteristic colonies of microorganisms was performed to obtain pure cultures. Genomic DNA was isolated using a Bead-Beat Micro Gravity kit (A&A Biotechnology, Gdansk, Poland), which separated DNA by electrophoresis in a 1.0% agarose gel (5 mm 3 sample per gel). For the PCR reaction, a reaction mixture of the following composition was used: 5 mm 3 (~50 ng) of genomic DNA, 25 mm 3 of 2× PCR Master Mix Plus High GC (A&A Biotechnology, Gdansk, Poland), 0.2 mm 3 of each primer at 100 μM, and 19.6 mm 3 of sterile water. B-all For (GAG TTT GAT CCT GGC TCA G) and B-all Rev (ACG GCT ACC TTA CGA CTT) primers were used to isolate the 16S rRNA gene of bacteria, while ITS1 (TTC GTA GGT GAA CCT GCG G) and ITS4 (TCC TCC GCT TAT TGA TAT GC) primers were used to isolate the ITS region of fungi. Conditions for the PCR reaction can be found in . After the PCR reaction on an agarose gel (2.0%), the reaction mixture was separated (2 mm 3 of sample per gel), and the amplified DNA fragments were purified using the Clean-Up AX kit (A&A Biotechnology, Poland). The resulting PCR products were resuspended in 10.0 mM Tris-HCl pH 8.0 and diluted to a concentration of 100 ng mm −3 . 
DNA sequencing was performed by Macrogen (Amsterdam, Netherlands) on a 3730 XL Analyzer DNA analyzer (Life Technologies Holding Pte Ltd., Singapore) . The DNA sequences obtained were compared with GenBank (National Center of Biotechnology Information) data. The DNA sequences of the 16S rRNA subunit of bacteria were compared using BLAST (Basic Local Alignment Search Tool) software [ https://blast.ncbi.nlm.nih.gov/Blast.cgi (accessed on 1 July 2024)], while the ITS regions of fungi were compared using Internal Transcribed Spacer software [ https://www.applied-maths.com/download/software (accessed on 1 July 2024)]. The access in the GenBank database for the nucleotide sequences of bacteria are under numbers ranging from PP952047 to PP952052 [ https://www.ncbi.nlm.nih.gov/nuccore/PP952047.1:PP952052 (accessed on 1 July 2024), https://www.ncbi.nlm.nih.gov/nuccore (accessed on 1 July 2024)], while those of fungi are under numbers ranging from PP952058 to PP952062 [ https://www.ncbi.nlm.nih.gov/nuccore/PP952058.1:PP52062.1 (accessed on 1 July 2024)]. Based on the obtained nucleotide sequences of the identified microorganisms, a phylogenetic tree was created using the neighbor-joining method with the MEGA 11 software [ https://www.megasoftware.net/show_eua (accessed on 1 July 2024)] . The conditions for creating the phylogenetic tree in MEGA 11 software were as follows: statistical method—neighbor-joining (NJ); scope—all selected taxa; test phylogeny—bootstrap method (no. of bootstrap reconstruction—500 for bacteria and fungi); substitution type—nucleotide; model/methods—Maximum Composite Likelihood; substitution of include—d: transitions and transversion; rates among sites—uniform rates; pattern among lineages—same (homogenous); gaps/missing data treatment—partial deletion; select codon positions—1st + 2nd + 3rd + noncoding; number of threads—1; searching NJ tree—100%, by sequence difference—0.066 for bacteria and 0.036 for fungi.
Enzymatic analyses of the soil were performed at 30, 60, and 90 days (9 replicates), determining the activity of dehydrogenases, catalase, alkaline phosphatase, acid phosphatase, and urease. The following substrates were used: 2,3,5-triphenyltetrazolium chloride for dehydrogenases, hydrogen peroxide for catalase, disodium 4-nitrophenyl phosphate for alkaline and acid phosphatase, and urea for urease. Dehydrogenase activity was determined by the Öhlinger method, catalase by the Johnson and Temple method, and alkaline phosphatase and urease by the Alef and Nannipieri method. These analyses were performed according to the procedure given in the studies by Wyszkowska et al. and Wyszkowska et al. .
The effects of azoxystrobin on the growth and development of plants ( Lepidium sativum L., Sinapis alba L., and Sorghum saccharatum L.) were assessed at the specified test dates (30, 60, and 90 days) using the Phytotoxkit test. Soil (110 g) from the control (soil without added fungicide) and from the treatments with azoxystrobin at 0.110 mg kg −1 and 32.92 mg kg −1 was introduced onto plastic plates. Then, 10 seeds of each plant were placed on moist filter paper in 3 replicates. The material thus prepared was incubated for 72 h (temperature 25 °C). The shoot and root lengths of the test plants were then measured.
The following soil biological indices were calculated at each test date (30, 60, and 90 days):

▪ Colony development index (CD) of microorganisms : CD values range from 0 to 100. CD values close to 100 indicate rapid growth of the microbial population in a short period.

(1) CD = (N1/1 + N2/2 + N3/3 + ⋯ + N10/10) × 100

where N1, N2, N3, …, N10 are the ratios of the number of microbial colonies grown on the 1st, 2nd, 3rd, …, 10th day of incubation to the total number of microbial colonies grown over the entire incubation period (10 days);

▪ Ecophysiological diversity index (EP) of microorganisms: EP takes values from 0 to 1, measuring the stability and homogeneity of microorganisms over time. EP values close to 1 indicate steady growth of microorganisms in the environment.

(2) EP = −Σ(pi × log10 pi)

where pi is the ratio of the number of microbial colonies grown on a given incubation day to the total number of microbial colonies grown over the entire incubation period (10 days);

▪ Changes (Cha) in microbial abundance, enzyme activity, seed germination, and root elongation in soil caused by azoxystrobin: a positive Cha value indicates stimulation of the analyzed parameter under the influence of azoxystrobin, while a negative value indicates inhibition.

(3) Cha = ((A − C)/C) × 100%

where A represents the value of the analyzed parameter in the soil with azoxystrobin, and C represents its value in the control soil;

▪ Resilience index (RL) of azoxystrobin-treated soil, determined for microbial abundance and enzyme activity . RL values range from −1 to 1. An RL value close to −1 indicates that the soil is not returning to equilibrium, a value close to 1 indicates that the soil is returning to equilibrium, and a value close to 0 indicates that the soil is out of, or slightly out of, equilibrium.
(4) RL(tx) = 2|D0| / (|C0| + |Dx|)

where D0 is the difference in soil microbial numbers and enzyme activity between the control soil sample (C0) and the azoxystrobin-treated soil sample at the start of incubation (t0), and Dx is the corresponding difference between the control and the azoxystrobin-treated sample after 60 and 90 days of soil incubation.
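As a minimal sketch, the CD, EP, and Cha indices defined in Eqs. (1)–(3) can be computed from daily colony counts as follows (the counts below are hypothetical illustration values, not data from this study):

```python
import math

def colony_development(daily_counts):
    # Eq. (1): each day's share of the 10-day colony total is divided by
    # the day number; populations that appear early score close to 100.
    total = sum(daily_counts)
    return 100 * sum((n / total) / day
                     for day, n in enumerate(daily_counts, start=1))

def ecophysiological_diversity(daily_counts):
    # Eq. (2): EP = -sum(p_i * log10(p_i)); even colony appearance over
    # the 10 days gives values close to 1.
    total = sum(daily_counts)
    return -sum((n / total) * math.log10(n / total)
                for n in daily_counts if n > 0)

def change_index(treated, control):
    # Eq. (3): percentage change caused by azoxystrobin relative to control.
    return (treated - control) / control * 100

# hypothetical 10-day colony counts
fast = [80, 10, 5, 2, 1, 1, 1, 0, 0, 0]   # most colonies appear early
steady = [10] * 10                         # colonies appear evenly

cd_fast = colony_development(fast)             # ~ 87.7, rapid early growth
ep_steady = ecophysiological_diversity(steady) # ~ 1.0, perfectly even growth
ch = change_index(150.0, 100.0)                # +50.0 %, i.e. stimulation
```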
The results obtained were statistically processed with a two-factor analysis of variance (ANOVA; factor 1: azoxystrobin dose, factor 2: soil incubation time) at p ≤ 0.01 using Statistica 13.3 software . The percentage of observed variability in the studied soil parameters was determined using the η 2 coefficient. Homogeneous groups were calculated using Tukey’s test at p ≤ 0.01 to identify the most significant differences between mean values. Pearson’s simple correlation coefficients between the microbiological and biochemical soil parameters were presented as a heat map.
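The η 2 coefficient reported above is the share of each effect's sum of squares in the total sum of squares of the two-factor design. A minimal sketch for a balanced dose × time layout, using synthetic illustration values rather than measurements from this study:

```python
from itertools import product

def eta_squared(data):
    """data: {(dose, time): [replicate values]} for a balanced two-factor
    design. Returns eta^2 (SS_effect / SS_total) for the dose effect,
    the time effect, their interaction, and the residual error."""
    doses = sorted({k[0] for k in data})
    times = sorted({k[1] for k in data})
    n = len(next(iter(data.values())))            # replicates per cell
    values = [v for cell in data.values() for v in cell]
    grand = sum(values) / len(values)
    cell_mean = {k: sum(v) / n for k, v in data.items()}
    dose_mean = {a: sum(cell_mean[(a, b)] for b in times) / len(times)
                 for a in doses}
    time_mean = {b: sum(cell_mean[(a, b)] for a in doses) / len(doses)
                 for b in times}
    ss_total = sum((v - grand) ** 2 for v in values)
    ss_dose = n * len(times) * sum((dose_mean[a] - grand) ** 2 for a in doses)
    ss_time = n * len(doses) * sum((time_mean[b] - grand) ** 2 for b in times)
    ss_int = n * sum((cell_mean[(a, b)] - dose_mean[a] - time_mean[b] + grand) ** 2
                     for a, b in product(doses, times))
    ss_err = ss_total - ss_dose - ss_time - ss_int
    return {"dose": ss_dose / ss_total, "time": ss_time / ss_total,
            "dose x time": ss_int / ss_total, "error": ss_err / ss_total}

# synthetic example: only the dose changes the response
data = {(0, 30): [1.0, 1.0], (0, 60): [1.0, 1.0],
        (1, 30): [3.0, 3.0], (1, 60): [3.0, 3.0]}
shares = eta_squared(data)   # the dose effect explains all the variability
```

The significance testing itself (F tests, Tukey's HSD) was done in Statistica; the sketch only reproduces the variance partitioning behind η 2.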
Azoxystrobin was observed to induce changes in the soil microbiome and enzymatic activity, as well as in plant growth and development, over time. Its field dose (0.110 mg kg −1 ) increased the numbers of organotrophic bacteria and actinobacteria, the CD values, and the activity of soil enzymes. At the same time, it reduced the number of fungi, decreased the EP values, and inhibited seed germination and root elongation of the tested plants. In turn, its contaminating dose (32.92 mg kg −1 ) reduced the number of fungi; suppressed the activities of dehydrogenases, alkaline phosphatase, acid phosphatase, and urease; and decreased the EP values, while increasing the CD values and enhancing catalase activity. In addition, it significantly inhibited seed germination and root elongation of Lepidium sativum L., Sinapis alba L., and Sorghum saccharatum . It was observed that the control soil (soil not contaminated with the fungicide) was most heavily populated by the bacteria PP952050.1 Bacillaceae bacterium strain (C) and PP952049.1 Bacillus cereus strain (C) and the fungi PP952060.1 Talaromyces pinophilus isolate (C) and PP952061.1 Trichoderma viride isolate (C), while the contaminated soil was most heavily populated by the bacteria PP952052.1 Priestia megaterium strain (P) and PP952051.1 Bacillus mycoides strain (P) and the fungus PP952062.1 Keratinophyton terreum isolate (P). The effects of azoxystrobin on the microbiota, enzymes, and plants varied over time, depending on the dose, the species of microorganisms and plants, and the enzyme type. The study results indicate that azoxystrobin can trigger significant changes in soil biological parameters, particularly when applied in the contaminating dose. It caused permanent disorders in the growth of fungi and in the activity of dehydrogenases, acid phosphatase, and urease, as evidenced by negative values of the RL index.
The identification of bacteria and fungi in the soil containing azoxystrobin can be harnessed to restore soils contaminated with this fungicide by their bioaugmentation with resistant and degrading strains.
|
Exploiting the potential of | fc8764c7-b0e4-4c13-90b0-f6426be5125a | 11899226 | Pharmacology[mh] | Patient adherence to a treatment regimen remains one of the key challenges within the pharmaceutical community, as it significantly influences therapeutic outcomes, especially in the treatment of chronic disorders and diseases (Bassand et al., ). Managing chronic illnesses demands strict adherence to a treatment regimen, which commonly includes multiple daily or weekly doses to maintain constant therapeutic levels of the drug in the body. This is of particular importance for drugs susceptible to rapid in vivo clearance (Jindal et al., ). Yet, compliance with extended therapeutic regimens typically stands at approximately 50%, even in developed countries (Sabaté, ). The fast-growing field of long-acting depot drug delivery systems for subcutaneous (SC) administration holds the potential to revolutionize the current treatment of chronic conditions (Nkanga et al., ; Dubbelboer & Sjögren, ; Rama & Ribeiro, ). Long-acting depots reduce dosing frequency and, with it, the associated side effects, while the SC delivery route enables quick and convenient self-administration. Ultimately, these benefits lead to improved patient adherence and better therapeutic outcomes at reduced overall healthcare costs (van den Bemt et al., ; Li et al., ). A plethora of drug delivery technologies have been proposed for the formulation of long-acting depot drug delivery systems to date (Chaudhary et al., ; Park et al., ; Lou et al., ). Simple oil-based solutions and suspensions were introduced first, followed by polymeric microspheres. Lately, in situ forming drug delivery systems have emerged as an attractive platform, in which the depot forms upon injection in response to a phase transition trigger such as body biological fluid, enzyme catalysis, physiological temperature, or pH (Jain & Jindal, ). 
Among them, injectable in situ forming liquid crystalline systems have gained considerable appeal owing to remarkable structural features that give rise to unique functionalities (Rahnfeld & Luciani, ; Sharma et al., ). Liquid crystalline mesophases form spontaneously by the self-assembly of amphiphilic lipids upon contact with an aqueous environment, providing a liquid crystalline gel with embedded drug molecules. The sustained drug release results from slow drug diffusion through the network of nanochannels in the mesophases, whereby the channel size can be adjusted by changing the composition or other conditions (Shanmugam & Banerjee, ). Various mesophases can be formed, corresponding to molecular shapes, local and global packing constraints, and the average interfacial mean curvature (Mezzenga et al., ). Accordingly, lyotropic liquid crystals (LCCs) can be classified into lamellar, hexagonal, and cubic mesophases. For the development of long-acting depot drug delivery systems for SC administration, hexagonal and cubic LCCs are of prime interest due to their highly ordered microstructure. Hexagonal mesophases consist of cylindrical micelles packed in an infinite two-dimensional hexagonal lattice, while cubic mesophases consist of three-dimensional structures formed by a continuous curved lipid bilayer with non-contacting aqueous channels (Chavda et al., ). Hexagonal and cubic LCCs are thermodynamically stable colloidal systems and, owing to their high degree of order, are less prone to fusion, aggregation, and drug leakage than other in situ forming drug delivery systems (Allegritti et al., ). In addition, their tunable microstructural organization enables relatively high loading and predictable release of small molecules as well as biomolecules such as proteins and peptides (Clogston & Caffrey, ; Chavda et al., ). 
In keeping with this, in situ forming LCCs are especially relevant as long-acting depot drug delivery systems for peptides, as there is great interest in improving their pharmacokinetic properties (Tiwari et al., ; Wang et al., ). Namely, despite their high efficacy and low toxicity, the use of peptides is limited by their short plasma half-life, caused by significant enzymatic metabolism in vivo, as well as by their inherently poor physical and chemical stability. Given their immense therapeutic potential, market prospects, and economic value, there is a high demand for novel drug delivery systems that would effectively address the challenges described above (Gonella et al., ; Yang et al., ). Thymosin alpha 1 (Tα1), an immunomodulatory bioactive peptide of 28 amino acid residues, characterized by a short plasma half-life and a consequent twice-weekly SC administration regimen (Dominari et al., ), was chosen as the model peptide drug in this study. Herein, we thus aimed to exploit the potential of the liquid crystalline platform to develop a patient-friendly in situ forming system for SC administration, designed to achieve sustained release of the peptide drug Tα1. Accordingly, our study prioritized the following main objectives: (i) optimal rheological properties of the nonaqueous precursor formulation for SC injection, (ii) easy and quick in situ phase transition to hexagonal and/or cubic LCCs triggered merely by water absorption, and (iii) sustained release kinetics of the peptide drug Tα1 that would notably minimize its dosing frequency.
Materials

Ethanol (96% v/v) was provided by Pharmachem, Ljubljana, Slovenia. Lipoid ® S-100, soybean lecithin with not less than 94% (w/w) phosphatidylcholine content, was supplied by Lipoid GmbH, Ludwigshafen, Germany. According to the manufacturer’s specification, the fatty acids of the two acyl groups of phosphatidylcholine are palmitic (15%), stearic (3%), oleic and isomers (12%), linoleic (62%), and α-linolenic (5%). Glycerol monooleate, type 40 (Peceol), and glycerol monolinoleate (Maisine ® CC) were obtained as gift samples from Gattefossé SAS, Saint-Priest, France. According to the manufacturer’s specification, the former consists of monoglycerides (32–52%), diglycerides (30–50%), and triglycerides (5–20%) of mainly oleic (C 18:1 ) acid, and the latter consists of monoglycerides (32–52%), diglycerides (40–55%), and triglycerides (5–20%) of mainly linoleic (C 18:2 ) and oleic (C 18:1 ) acids. Tα1, a 28-amino-acid peptide, was purchased from Pure Peptides UK, Epsom, United Kingdom. Bidistilled water and phosphate buffered solution (PBS) were used throughout the experiments. All other chemicals and reagents were of analytical grade. For the polarized light microscope examination, gelation time measurements, gelation test, and water uptake evaluation, PBS with pH = 7.4 was used to mimic the subcutaneous environment at the injection site. For the in vitro release testing, PBS with pH = 6.8 containing 5% (m/m) of ethanol was used as the release medium to improve the stability of the peptide drug Tα1.

Pseudoternary phase diagram construction

In order to determine the concentration ranges of LCC formation, four pseudoternary phase diagrams were constructed by the water titration method. Each diagram depicted three phases, with certain phases consisting of different combinations of ingredients. The first phase was composed of either a hydrotropic substance, i.e. ethanol, or a mixture of a hydrotropic substance and an amphiphile, i.e.
ethanol and lecithin with a mass ratio of 1/1. The next phase was designated as the lipid phase, consisting of glycerol monooleate or glycerol monolinoleate. Finally, the third vertex of the diagram was represented by the hydrophilic phase. Nine dilution lines were constructed for each diagram, and the starting point of each line denoted a precursor formulation. All precursor formulations were thus composed of ethanol or the ethanol-lecithin mixture and glycerol monooleate or glycerol monolinoleate with mass ratios ranging from 90/10 to 10/90%. For the titration process, each precursor formulation was slowly titrated with aliquots of the hydrophilic phase and stirred at room temperature for a sufficient time to reach equilibrium. Subsequently, samples were checked for homogeneity, consistency, and appearance. Homogeneous, highly viscous, and opaque samples were characterized as LCCs. Next, samples visually identified as LCCs were further checked under a polarized light microscope at 25 °C and 37 °C for the presence of liquid crystalline mesophases. Based on the obtained results, eight precursor formulations were then selected for further characterization. The composition of the selected precursor formulations is detailed in , while the preparation procedure is outlined in the following section.

Sample preparation

Unloaded precursor formulations were prepared by mixing appropriate amounts of ethanol or the ethanol-lecithin mixture and glycerol monooleate or glycerol monolinoleate for a sufficient time to form a homogeneous system. In the case of Tα1-loaded precursor formulations containing 1.6 mg/g of peptide drug (i.e. the dose of the reference medicine) (Dominari et al., ), Tα1 was first dissolved in ethanol or the ethanol-lecithin mixture. The resulting solution was then mixed with an appropriate amount of glycerol monooleate or glycerol monolinoleate for a sufficient time until a homogeneous system was obtained.
Both unloaded and Tα1-loaded precursor formulations were prepared at room temperature.

Polarized light microscopy

Polarized light microscopy (PLM) was used for phase transition analysis of the samples from the pseudoternary phase diagram construction as well as the gelation test. In the former, samples that were identified as LCCs based on macroscopic examination of mixtures formed along dilution lines of the pseudoternary phase diagrams were examined at 25 °C and 37 °C to assess phase transition and presence of liquid crystalline mesophases, respectively. In the latter, phase transition of precursor formulations upon contact with PBS at predetermined time points (1, 6, 12, 24, 36, 48 and 72 hours, and 7, 10 and 14 days) was examined at 37 °C. PLM was performed using a CX31-P Upright Microscope (Olympus, Tokyo, Japan). The magnification was 40×.

Gelation time measurements

Gelation time is the time required for a precursor formulation to convert into an in situ formed gel upon contact with an excess aqueous medium. Within the scope of gelation time measurements, 0.5 mL of each precursor formulation was injected with a 25-gauge needle into 5 mL of PBS preheated to 37 °C. The time from contact of the liquid transparent precursor formulation with the aqueous medium until complete transformation into an opaque in situ formed gel was recorded as the gelation time (Mei et al., ). For each precursor formulation, the measurement was performed in triplicate.

Gelation test

The ability of a precursor formulation to form and maintain an in situ formed gel upon contact with excess aqueous medium for a prolonged period of time was evaluated based on the macroscopic appearance of in situ formed gels at predetermined time points (1, 6, 12, 24, 36, 48 and 72 hours, and 7, 10 and 14 days). In addition, for better visualization, in situ formed gels in vials immediately after injection were also photographed.
0.5 mL of each precursor formulation was injected with a 25-gauge needle into a 10 mL vial with 5 mL of PBS preheated to 37 °C. To note, for every time point, each precursor formulation was injected into a separate vial. During the whole testing period, samples were stored in an orbital shaker-incubator ES-20 (SIA Biosan, Riga, Latvia) set at 50 rpm and 37 °C. At each time point, PBS on the surface of each in situ formed gel was cautiously wiped off. Subsequently, in situ formed gels were observed visually for homogeneity, consistency, and appearance (Ki et al., ; Mei et al., ). In addition, phase transition analysis of in situ formed gels at 37 °C using a polarized light microscope was performed at each time point.

Water uptake evaluation

To explore the swelling behavior of in situ formed gels, their water uptake was monitored by gravimetric analysis according to a standard protocol (Mei et al., ) at predetermined time points (1, 6, 12, 24, 36, 48 and 72 hours, and 7, 10 and 14 days). In keeping with this, the time point at which equilibrium with water was reached and the maximum water absorption, denoted as water capacity, were also determined. 0.5 g of each precursor formulation was injected with a 25-gauge needle into a 10 mL vial with 5 mL of PBS preheated to 37 °C. During the whole testing period, samples were stored in an orbital shaker-incubator ES-20 (SIA Biosan, Riga, Latvia) set at 50 rpm and 37 °C. Experiments were performed in triplicate. At each time point, the respective masses were determined and the percentage of water uptake was calculated from , where M vg represents the mass of a vial together with the in situ formed gel, M v represents the mass of the vial alone, M vpp represents the mass of a vial together with PBS and precursor formulation, and M vp represents the mass of a vial together with PBS.
(1) W% = [(Mvg − Mv)/(Mvpp − Mvp) − 1] × 100%

Differential scanning calorimetry

Differential scanning calorimetry (DSC) measurements were carried out for the individual compounds (i.e. ethanol, lecithin, glycerol monooleate, glycerol monolinoleate, and bidistilled water) and for in situ formed gels after reaching equilibrium with water. DSC was performed to analyze intermolecular interactions and the state of water within the samples. A DSC 1 differential scanning calorimeter (Mettler Toledo, Greifensee, Switzerland) was used. Approximately 10 mg of the sample was accurately weighed into a small aluminum pan and sealed. An empty sealed pan was used as a reference. Nitrogen with a flow rate of 50 mL/min was used as the purge gas. One cooling and one heating scan were recorded during each analysis. Samples were cooled from 20 °C to −80 °C, kept at −80 °C for 5 min, and heated to 140 °C. The cooling and heating rate was 5 K/min.

Rheological measurements

The rheological behavior of precursor formulations and of in situ formed gels after reaching equilibrium with water was characterized using a Physica MCR 301 rheometer equipped with RheoCompass software (Anton Paar GmbH, Graz, Austria). Rotational tests were conducted at 25 ± 0.1 °C for precursor formulations and at 37 ± 0.1 °C for in situ formed gels after reaching equilibrium with water. Experiments were performed in duplicate. Rotational measurements were carried out to determine the viscosity (η), which was calculated according to , where τ is the shear stress and γ̇ is the shear rate.

(2) η = τ/γ̇

Oscillatory tests were employed to determine the storage (elastic; G′) and loss (viscous; G″) moduli of in situ formed gels after reaching equilibrium with water at 37 ± 0.1 °C. They were calculated using and , respectively, where τ is the shear stress, γ is the deformation, and δ is the phase shift angle.
(3) G′ = (τ/γ) × cos δ

(4) G″ = (τ/γ) × sin δ

In addition, the complex viscosity (η*) was calculated according to , where τ is the shear stress, γ is the deformation, and ω is the angular frequency.

(5) η* = τ/(γ × ω)

Rotational tests were performed using a cone and plate measuring system CP50-2 (cone diameter 49.961 mm, cone angle 2.001°, sample thickness 0.209 mm). The shear rate ranged from 1 s −1 to 100 s −1 . For the oscillatory tests, stress sweep measurements were carried out at a constant frequency of 10.0 s −1 to determine the linear viscoelastic region. Afterward, the oscillatory shear measurements were performed as a function of frequency (0.1–100 s −1 ) at a small stress (0.1%) chosen within the linear region to provide the least disturbance of the microstructure.

In vitro release testing

A membraneless model, which enables direct contact between the in situ formed gel and the release medium, was applied for in vitro release testing. 1 g of Tα1-loaded precursor formulation containing 1.6 mg/g of peptide drug (i.e. the dose of the reference medicine) (Dominari et al., ) was injected with a 25-gauge needle directly into 15 mL of release medium preheated to 37 °C. Considering the physiological conditions after SC administration and the chemical stability of the peptide drug Tα1, PBS (pH = 6.8) containing 5% (m/m) of ethanol was selected as the most appropriate release medium for completion of the test in 2 weeks at 37 °C. At predetermined time points (1, 6, 12, 24, 36, 48 and 72 hours, and 7, 10 and 14 days), 1 mL aliquots of release medium were withdrawn and replaced by an equal volume of fresh preheated receptor medium to keep the volume constant. During the whole testing period, samples were stored in an orbital shaker-incubator ES-20 (SIA Biosan, Riga, Latvia) set at 50 rpm and 37 °C. Experiments were performed in quadruplicate.
The medium taken at each time point was analyzed quantitatively by the ultra-high performance liquid chromatography (UHPLC) analysis described below. The cumulative amount of released peptide drug Tα1 (Qt) was plotted as a function of time and calculated according to , where ct is the peptide drug Tα1 concentration of the receptor medium at each sampling time, Vrm is the volume of the receptor medium, ci is the peptide drug Tα1 concentration at previous sampling times, and Vi is the sampling volume.

(6) Qt = ct × Vrm + Σ_{i=0}^{t−1} (ci × Vi)

UHPLC analysis

An Infinity 1290 ultra-high performance liquid chromatograph (Agilent Technologies, Santa Clara, CA, USA) equipped with a diode array detector with a high-sensitivity Max-Light cartridge cell (60 mm) and an EZChrom acquisition system was used. Chromatographic separation was performed on a reversed-phase Synergi Hydro column, 150 × 4.6 mm, 4 µm particle size (Phenomenex, Torrance, CA, USA), at 40 °C. The mobile phase consisted of solvent A: 0.1% H 3 PO 4 and solvent B: acetonitrile, with a gradient elution of 12.0% to 16.5% solvent B in 12 min at a flow rate of 1 mL/min. The total run time was 14 min. The injection volume was 10 μL, and a detection wavelength of 214 nm was selected. The method was validated in terms of selectivity (no interference at the retention time of Tα1), linearity ( R 2 = 1.000 in the concentration range between 1 and 100 mg/L), precision (RSD < 5%), and accuracy (100 ± 5%). Tα1 was stable in the samples for at least 4 days when 5% ethanol was added to the solution.

Circular dichroism spectroscopy

Circular dichroism (CD) measurements were performed using a Chirascan CD spectrometer equipped with a Peltier temperature controller (Applied Photophysics Ltd, London, United Kingdom). CD spectra were recorded in a 1-mm quartz cell (Hellma GmbH & Co, Müllheim, Germany) at 37 °C using a 1-nm step, 1-nm bandwidth, and 1-s sampling.
The secondary structure of the peptide drug Tα1 was analyzed by scan measurement in the range from 200 nm to 260 nm. The results are the average of three scans.

Data and statistical analysis

The data and statistical analysis were performed with GraphPad Prism 10.2.0. All results, unless stated otherwise, were expressed as mean ± standard deviation (SD).
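As a minimal sketch, the quantities defined in Eqs. (1)–(6) above can be computed as follows (the numerical inputs are hypothetical illustration values, not measurements from this study):

```python
import math

def water_uptake_percent(m_vg, m_v, m_vpp, m_vp):
    # Eq. (1): mass of the in situ formed gel relative to the mass of the
    # injected precursor formulation, expressed as percentage water uptake.
    return ((m_vg - m_v) / (m_vpp - m_vp) - 1) * 100

def viscosity(tau, gamma_dot):
    # Eq. (2): eta = tau / gamma_dot
    return tau / gamma_dot

def storage_modulus(tau, gamma, delta):
    # Eq. (3): G' = (tau / gamma) * cos(delta), with delta in radians
    return (tau / gamma) * math.cos(delta)

def loss_modulus(tau, gamma, delta):
    # Eq. (4): G'' = (tau / gamma) * sin(delta)
    return (tau / gamma) * math.sin(delta)

def complex_viscosity(tau, gamma, omega):
    # Eq. (5): eta* = tau / (gamma * omega)
    return tau / (gamma * omega)

def cumulative_release(concentrations, v_receptor=15.0, v_sample=1.0):
    # Eq. (6): Qt = ct*Vrm + sum of ci*Vi over earlier sampling points,
    # correcting for the drug removed with each withdrawn aliquot.
    released, removed = [], 0.0
    for c in concentrations:
        released.append(c * v_receptor + removed)
        removed += c * v_sample
    return released

# hypothetical gravimetric masses (g): the gel weighs 25% more than the
# injected precursor, i.e. 25% water uptake
w = water_uptake_percent(10.0, 5.0, 8.0, 4.0)   # 25.0 %
# hypothetical receptor-medium concentrations (mg/mL) at two sampling points
q = cumulative_release([0.1, 0.2])              # [1.5, 3.1] mg
```

The sampling correction in Eq. (6) matters because each withdrawn 1 mL aliquot permanently removes drug from the 15 mL receptor volume; without the summation term the cumulative release would be underestimated at later time points.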
Ethanol (96% v/v) was provided by Pharmachem, Ljubljana, Slovenia. Lipoid ® S-100, soybean lecithin with not less than 94% (w/w) phosphatidylcholine content, was supplied from Lipoid GmbH, Ludwigshafen, Germany. According to the manufacturer’s specification the fatty acids of the two acyl groups of phosphatidylcholine are palmitic (15%), stearic (3%), oleic and isomers (12%), linoleic (62%), and α-linolenic (5%). Glycerol monooleate, type 40 (Peceol), and glycerol monolinoleate (Maisine ® CC) were obtained as a gift sample from Gattefossé SAS, Saint-Priest, France. According to the manufacturer’s specification, the first consists of monoglycerides (32–52%), diglycerides (30–50%) and triglycerides (5–20%) of mainly oleic (C 18:1 ) acid and the latter consist of monoglycerides (32–52%), diglycerides (40–55%) and triglycerides (5–20%) of mainly linoleic (C 18:2 ) and oleic (C 18:1 ) acids. Tα1, a 28 amino acid sequence peptide, was purchased from Pure Peptides UK, Epsom, United Kingdom. Bidistilled water and phosphate buffered solution (PBS), respectively, were used throughout the experiments. All other chemicals and reagents were of analytical grade. For polarized light microscope examination, gelation time measurements, gelation test, and water uptake evaluation PBS with pH = 7.4 was used to mimic the subcutaneous environment at the injection site. As for the in vitro release testing, PBS with pH = 6.8 containing 5% (m/m) of ethanol was used as the release medium for the stability improvement of the peptide drug Tα1.
In order to determine the concentration ranges of the LCCs formation, four pseudoternary phase diagrams were constructed by a water titration method. Each diagram depicted three phases, with certain phases consisting of different combinations of ingredients. The first phase was composed of either a hydrotropic substance, i.e. ethanol, or a mixture of a hydrotropic substance and an amphiphile, i.e. ethanol and lecithin with a mass ratio of 1/1. The next phase was designated as the lipid phase consisting of glycerol monooleate or glycerol monolinoleate. Finally, the third vertex of the diagram was represented by the hydrophilic phase. Nine dilution lines were constructed for each diagram, and the starting point of each line denoted a precursor formulation. All precursor formulations were thus composed of ethanol or ethanol-lecithin mixture and glycerol monooleate or glycerol monolinoleate with mass ratios ranging from 90/10 to 10/90%. For the titration process, each precursor formulation was slowly titrated with aliquots of the hydrophilic phase and stirred at room temperature for a sufficient time to obtain equilibrium. Subsequently, samples were checked for homogeneity, consistency, and appearance. Homogeneous, highly viscous, and opaque samples were characterized as LCCs. Next, samples visually identified as LCCs were further checked under a polarized light microscope at 25 °C and 37 °C for the presence of liquid crystalline mesophases. Based on the obtained results, eight precursor formulations were then selected for further characterization. The composition of these selected studied precursor formulations is detailed in , while the preparation procedure is outlined in the following section.
Unloaded precursor formulations were prepared by mixing appropriate amounts of ethanol or ethanol-lecithin mixture and glycerol monooleate or glycerol monolinoleate for a sufficient time to form a homogeneous system. In the case of Tα1-loaded precursor formulations containing 1.6 mg/g of peptide drug (i.e. dose of reference medicine) (Dominari et al., ), Tα1 was first dissolved in ethanol or ethanol-lecithin mixture. The resulting solution was then mixed with appropriate amount of glycerol monooleate or glycerol monolinoleate for a sufficient time until a homogeneous system was obtained. Both unloaded and Tα1-loaded precursor formulations were prepared at room temperature.
Polarized light microscopy (PLM) was used for phase transition analysis of the samples from the pseudoternary phase diagram construction as well as the gelation test. In the former, samples that were identified as LCCs based on macroscopic examination of mixtures formed along dilution lines of the pseudoternary phase diagrams were examined at 25 °C and 37 °C to assess phase transition and presence of liquid crystalline mesophases, respectively. In the latter, phase transition of precursor formulations upon contact with PBS at predetermined time points (1, 6, 12, 24, 36, 48 and 72 hours, and 7, 10 and 14 days) was examined at 37 °C. PLM was performed using a CX31-P Upright Microscope (Olympus, Tokyo, Japan). The magnification was 40×.
Gelation time is the time required for a precursor formulation to convert into an in situ formed gel upon contact with an excess aqueous medium. Within the scope of gelation time measurements, 0.5 mL of each precursor formulation was injected with a 25-gauge needle into 5 mL of PBS preheated at 37 °C. The time upon contact of a liquid transparent precursor formulation with the aqueous medium until complete transformation into an opaque in situ formed gel was recorded as the gelation time (Mei et al., ). For each precursor formulation, the measurement was performed in triplicate.
The ability to form and maintain an in situ formed gel of a precursor formulation upon contact with excess aqueous medium for a prolonged period of time was evaluated based on the macroscopic appearance of in situ formed gels at predetermined time points (1, 6, 12, 24, 36, 48 and 72 hours, and 7, 10 and 14 days). In addition, for better visualization, in situ formed gels in vials immediately after injection were also photographed. 0.5 mL of each precursor formulation was injected with a 25-gauge needle into a 10 mL vial with 5 mL of PBS preheated at 37 °C. To note, for every time point, each precursor formulation was injected into a separate vial. During the whole testing period samples were stored in an orbital shaker-incubator ES-20 (SIA Biosan, Riga, Latvia) set at 50 rpm and 37 °C. At each time point, PBS on the surface of each in situ formed gel was cautiously wiped off. Subsequently, in situ formed gels were observed visually for homogeneity, consistency, and appearance (Ki et al., ; Mei et al., ). In addition, phase transition analysis of in situ formed gels at 37 °C using a polarized light microscope was performed at each time point.
To explore the swelling behavior of in situ formed gels, their water uptake was monitored by gravimetric analysis according to a standard protocol (Mei et al., ) at predetermined time points (1, 6, 12, 24, 36, 48 and 72 hours, and 7, 10 and 14 days). In keeping with this, the time point at which equilibrium with water was reached and the water maximum absorption, denoted as water capacity, was also determined. 0.5 g of each precursor formulation was injected with a 25-gauge needle into a 10 mL vial with 5 mL of PBS preheated at 37 °C. During the whole testing period samples were stored in an orbital shaker-incubator ES-20 (SIA Biosan, Riga, Latvia) set at 50 rpm and 37 °C. Experiments were performed in triplicate. At each time point, the respective masses were determined and the percentage of water uptake was calculated from , where M vg represents mass of a vial together with in situ formed gel, M v represents mass of a vial alone, M vpp represents mass of a vial together with PBS and precursor formulation, and M vp represents the mass of a vial together with PBS. (1) W % = Mvg − Mv Mvpp − Mvp − 1 × 100 %
Differential scanning calorimetry (DSC) measurements were carried out for individual compounds (i.e. ethanol, lecithin, glycerol monooleate, glycerol monolinoleate, and bidistilled water) and in situ formed gels after reaching equilibrium with the water. DSC was performed to analyze intermolecular interactions and water state within samples. A DSC 1 differential scanning calorimeter (Mettler Toledo, Greifensee, Switzerland) was used. Approximately 10 mg of the sample was accurately weighed into a small aluminum pan and sealed. An empty sealed pan was used as a reference. Nitrogen with a flow rate of 50 mL/min was used as a purge gas. One cooling and one heating scan were recorded during each analysis. Samples were cooled from 20 °C to −80 °C, kept at −80 °C for 5 min, and heated to 140 °C. The cooling and heating rate was 5 K/min.
The rheological behavior of precursor formulations and in situ formed gels after reaching equilibrium with water was characterized using a Physica MCR 301 rheometer equipped with RheoCompass software (Anton Paar GmbH, Graz, Austria). Rotational tests were conducted at 25 ± 0.1 °C for precursor formulations and at 37 ± 0.1 °C for in situ formed gels after reaching equilibrium with water. Experiments were performed in duplicate. Rotational measurements were carried out to determine the viscosity (η), which was calculated according to Eq. (2), where τ is the shear stress and γ̇ is the shear rate. (2) η = τ / γ̇ Oscillatory tests were employed to define the storage (elastic; G′) and loss (viscous; G″) moduli of in situ formed gels after reaching equilibrium with water at 37 ± 0.1 °C. They were calculated using Eqs. (3) and (4), respectively, where τ is the shear stress, γ is the deformation, and δ is the phase shift angle. (3) G′ = (τ / γ) × cos δ (4) G″ = (τ / γ) × sin δ In addition, complex viscosity (η*) was calculated according to Eq. (5), where τ is the shear stress, γ is the deformation, and ω is the angular frequency. (5) η* = τ / (γ × ω) Rotational tests were performed using a cone and plate measuring system CP50-2 (cone diameter 49.961 mm, cone angle 2.001°, sample thickness 0.209 mm). The shear rate ranged from 1 s−1 to 100 s−1. For the oscillatory tests, the stress sweep measurements were carried out at a constant frequency of 10.0 s−1 to determine the linear viscoelastic region. Afterward, the oscillatory shear measurements were performed as a function of frequency (0.1–100 s−1) at a small strain (0.1%) chosen within the linear region to provide the least disturbance of the microstructure.
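The conversions in Eqs. (2)–(5) from measured stress, strain, phase shift angle, and angular frequency to η, G′, G″, and η* amount to a few lines of arithmetic. The sketch below uses hypothetical measurement values, not data from the study; it assumes the phase shift angle is reported in degrees, as many instruments do.

```python
import math

def viscosity(tau, gamma_dot):
    """Eq. (2): shear viscosity from shear stress (Pa) and shear rate (1/s)."""
    return tau / gamma_dot

def moduli(tau, gamma, delta_deg):
    """Eqs. (3)-(4): storage and loss moduli from stress amplitude (Pa),
    strain amplitude (dimensionless) and phase shift angle (degrees)."""
    d = math.radians(delta_deg)
    g_star = tau / gamma  # complex modulus magnitude
    return g_star * math.cos(d), g_star * math.sin(d)

def complex_viscosity(tau, gamma, omega):
    """Eq. (5): complex viscosity at angular frequency omega (rad/s)."""
    return tau / (gamma * omega)

# Hypothetical oscillatory data point: tau = 100 Pa, gamma = 0.001, delta = 10 deg
g_storage, g_loss = moduli(100.0, 0.001, 10.0)
print(g_storage > g_loss)  # delta < 45 deg means elastic-dominated -> True
```

A phase angle below 45° gives G′ > G″, the elastic, gel-like response the study reports for the in situ formed gels.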
A membraneless model, which enables direct contact between in situ formed gel and release medium, was applied for in vitro release testing. 1 g of Tα1-loaded precursor formulation containing 1.6 mg/g of peptide drug (i.e. dose of reference medicine) (Dominari et al., ) was injected with a 25-gauge needle directly into 15 mL of release medium preheated at 37 °C. Considering physiological conditions after SC administration and chemical stability of peptide drug Tα1, PBS (pH = 6.8) containing 5% (m/m) of ethanol was selected as the most appropriate release medium for the test completion in 2 weeks at 37 °C. At predetermined time points (1, 6, 12, 24, 36, 48 and 72 hours, and 7, 10 and 14 days) 1 mL aliquots of release medium were withdrawn and replaced by an equal volume of fresh preheated receptor medium to keep the volume constant. During the whole testing period samples were stored in an orbital shaker-incubator ES-20 (SIA Biosan, Riga, Latvia) set at 50 rpm and 37 °C. Experiments were performed in quadruplicate. The medium that was taken at each time point was analyzed quantitatively by ultra-high performance liquid chromatography (UHPLC) analysis described below. The cumulative amount of the released peptide drug Tα1 (Q_t) was plotted as a function of time and calculated according to Eq. (6), where c_t is the peptide drug Tα1 concentration of receptor medium at each sampling time, V_rm is the volume of receptor medium, c_i is the peptide drug Tα1 concentration at previous sampling times, and V_i is the sampling volume. (6) Q_t = c_t × V_rm + Σ_{i=0}^{t−1} (c_i × V_i)
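Eq. (6) corrects each cumulative value for the drug already withdrawn with the 1 mL samples that were replaced by fresh medium. A minimal sketch of that bookkeeping, using hypothetical concentrations:

```python
def cumulative_release(concs, v_rm, v_s):
    """Cumulative released amount Q_t per Eq. (6).

    concs -- measured drug concentrations at successive sampling times (mg/mL)
    v_rm  -- release-medium volume (mL), kept constant by replacement
    v_s   -- sampling volume (mL)
    Returns a list of Q_t values (mg).
    """
    q = []
    removed = 0.0  # drug mass already withdrawn in earlier samples
    for c in concs:
        q.append(c * v_rm + removed)
        removed += c * v_s
    return q

# Hypothetical concentrations at three time points (mg/mL), 15 mL medium, 1 mL samples
print([round(x, 4) for x in cumulative_release([0.01, 0.02, 0.03], v_rm=15.0, v_s=1.0)])
# -> [0.15, 0.31, 0.48]
```

Without the running `removed` term, the later Q_t values would undercount the drug carried out with each aliquot.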
An Infinity 1290 ultra-high performance liquid chromatograph (Agilent Technologies, Santa Clara, CA, USA) equipped with a diode array detector with a high-sensitivity Max-Light cartridge cell (60 mm) and an EZChrom acquisition system was used. Chromatographic separation was performed on a reversed-phase Synergi Hydro column 150 × 4.6 mm, 4 µm particle size (Phenomenex, Torrance, CA, USA) at 40 °C. The mobile phase consisted of solvent A: 0.1% H3PO4 and solvent B: acetonitrile with a gradient elution of 12.0% to 16.5% solvent B in 12 min at a flow rate of 1 mL/min. The total run time was 14 min. The injection volume was 10 μL, and a detection wavelength of 214 nm was selected. The method was validated in terms of selectivity (no interference at the retention time of the Tα1), linearity (R² = 1.000 in the concentration range between 1 and 100 mg/L), precision (RSD < 5%) and accuracy (100 ± 5%). Tα1 was stable in the samples for at least 4 days when 5% ethanol was added to the solution.
Circular dichroism (CD) measurements were performed using a Chirascan CD spectrometer equipped with a Peltier temperature controller (Applied Photophysics Ltd, London, United Kingdom). CD spectra were recorded in a 1-mm quartz cell (Hellma GmbH & Co, Müllheim, Germany) at 37 °C using 1-nm step, 1-nm bandwidth and 1-s sampling. The secondary structure of the peptide drug Tα1 was analyzed by scan measurement in the range from 200 nm to 260 nm. The results are the average of three scans.
The data and statistical analysis were performed with GraphPad Prism 10.2.0. All results, unless stated otherwise, were expressed as mean ± standard deviation (SD).
Pseudoternary phase diagram construction When developing a novel drug delivery system based on LCCs, the phase behavior of systems with precisely defined components can be investigated using ternary or pseudoternary phase diagrams. In this study, four pseudoternary phase diagrams were constructed for the systems containing ethanol/glycerol monooleate/hydrophilic phase , ethanol/glycerol monolinoleate/hydrophilic phase , ethanol/lecithin/glycerol monooleate/hydrophilic phase , and ethanol/lecithin/glycerol monolinoleate/hydrophilic phase . Ethanol served as a hydrotropic substance for viscosity reduction (Ferreira et al., ) as well as for peptide drug Tα1 stabilization. Lecithin was chosen as a biocompatible amphiphile capable of forming and stabilizing various LCCs mesophases depending on the remaining amphiphiles in the mix (Gosenca et al., ). Glycerol monooleate and glycerol monolinoleate were selected as hexagonal and/or cubic mesophases-forming amphiphilic lipids. They possess excellent biocompatible and biodegradable characteristics due to the presence of ester bonds in their structure, which undergo lipolytic degradation by endogenous lipases after SC injection (Zhang et al., ). The hydrophilic phase represented the aqueous environment of the SC tissue. Macroscopic analysis of the systems formed across the constructed pseudoternary phase diagrams was initially performed. Different proportions and types of components resulted in the formation of distinct systems. Transparent or semi-transparent and viscous gel-like systems were identified as LCCs. As our focus was on the formation of liquid crystalline mesophases, we did not explore in detail other regions of the diagrams where nonhomogeneous systems, coarse emulsions, or microemulsions were present. The pseudoternary phase diagram study showed that the absence or presence of lecithin had a key influence on the formation of LCCs, while the type of lipid did not show any significant effect.
Namely, in the diagrams from and employing only ethanol, lipid, and hydrophilic phase, a considerably smaller region of LCCs formation was observed compared to the diagrams from and utilizing ethanol/lecithin mixture, lipid, and hydrophilic phase. More specifically, in the case of the diagram from , adding hydrophilic phase at a level of 10–30% to precursor formulations containing 10–20% of ethanol resulted in the formation of LCCs. In parallel, a similarly small region of LCCs was distinguished in the diagram from , where addition of hydrophilic phase ranging from 50 to 70% to precursor formulations with 10–20% of ethanol led to LCCs formation. On the other hand, in the case of the diagrams from and , precursor formulations containing 50–80% of the ethanol/lecithin mixture formed a large region of LCCs after hydrophilic phase was added in the range of 10–90%. When comparing the systems across all diagrams from , it is important to note that the LCCs formed in the diagrams from and exhibited favorable macroscopic characteristics such as homogeneity and high viscosity. Phase transition analysis within pseudoternary phase diagram construction Systems identified as potential LCCs based on macroscopic analysis, utilizing pseudoternary phase diagrams, underwent subsequent microscopic analysis using polarized light microscope at room (25 °C) and body (37 °C) temperature. The acquired observations are schematically presented in , while the representative photomicrographs are depicted in . PLM is one of the most commonly used methods for investigating LCCs mesophases, offering valuable insight into their molecular organization and phase transitions. When subjected to polarized light, anisotropic systems such as lamellar and hexagonal LCCs exhibit characteristic birefringent pattern. Maltese crosses together with oily streaks denote the presence of lamellar mesophases, while fan-like textures indicate formation of hexagonal mesophases. 
A dark background, with no birefringence, suggests the presence of cubic mesophases, known for their isotropic liquid behavior (Manaia et al., ; Zhang et al., ). First, evaluation of the systems was performed at a temperature of 25 °C. In the case of the systems from the first and the second diagram, the black view under the polarized microscope pointed to the presence of cubic LCCs, in good agreement with the investigation by Mei et al. . In contrast, the phase behavior of the systems depicted in the third and fourth diagram exhibited distinctive characteristics of LCCs, confirming that lecithin has a key function in the formation of these LCCs. Namely, numerous and pronounced fan-like textures started to emerge in the dark background from precursor formulations containing 50%, 60%, and 70% of the ethanol/lecithin mixture, respectively, independent of the hydrophilic phase content. This observation is important as it leads to two key findings. Firstly, it indicates the presence of hexagonal LCCs and, in some areas, their coexistence with cubic LCCs; nevertheless, hexagonal mesophases were strongly prevailing. Secondly, it implies that absorption of only a small amount of hydrophilic phase was necessary for these precursor formulations to form hexagonal LCCs in situ. Further, mixed phases of hexagonal LCCs together with lamellar LCCs were formed from precursor formulations containing 80% of the ethanol/lecithin mixture, but only after a larger addition of hydrophilic phase, i.e. 40–90%. It should be noted that, for precursor formulations containing 80% of the ethanol/lecithin mixture, fan-like textures were less numerous and less pronounced, and individual Maltese crosses were also observed. Further, comparable results were obtained at a temperature of 37 °C for all four diagrams.
The only exception was that precursor formulations from the third and the fourth diagram containing 80% of the ethanol/lecithin mixture already showed fan-like textures along with Maltese crosses when the hydrophilic phase content reached 10%. This indicates that higher temperature induces, to some extent, the formation of hexagonal LCCs together with lamellar LCCs, which is desirable for SC administration. Based on all the findings obtained from the pseudoternary phase construction (i.e. macroscopic appearance) and the corresponding phase behavior analysis (i.e. microstructure), eight precursor formulations, capable of in situ phase transition to hexagonal LCCs upon addition of water, were selected for further characterization studies. The composition of the selected studied precursor formulations is reported in and highlighted in . Gelation time measurements Gelation time denotes the time required for a precursor formulation to transform into an in situ formed gel when exposed to excess aqueous medium (Mei et al., ). A rapid sol-gel transition is desired to minimize the possibility of initial burst release as in situ formation of gel retards the release of the incorporated drug. At room temperature, all precursor formulations were clear with good fluidity, but upon contact with aqueous medium heated to 37 °C, they quickly lost their flowability. Among all precursor formulations, a very short gelation time of a few seconds was measured for (E/L)Go50 (2.8 seconds), (E/L)Gl50 (2.5 seconds), (E/L)Go60 (4.1 seconds), and (E/L)Gl60 (3.6 seconds). Further, a slightly longer gelation time was measured for (E/L)Go70 (14.1 seconds) and (E/L)Gl70 (13.3 seconds). The longest sol-gel transition time was determined for (E/L)Go80 (70.0 seconds) and (E/L)Gl80 (45.4 seconds). These results indicate that all precursor formulations would spontaneously transform into an in situ gel at the site of administration upon exposure to the physiological fluid. 
However, it can be observed that there are certain differences among precursor formulations, which are most likely related to their composition. Namely, increasing glycerol monooleate and glycerol monolinoleate content, respectively, leads to a decrease in gelation time, which is also consistent with the literature data (Mei et al., ). The phenomenon can be attributed to glycerol monooleate and glycerol monolinoleate being amphiphilic lipids capable of forming hexagonal LCCs. Consequently, they have the ability to rapidly induce the formation of these highly viscous mesophases. Gelation test The ability of a precursor formulation to form and maintain an in situ formed gel upon injection into excess aqueous medium heated to 37 °C was evaluated within the gelation test. shows the visual appearance of in situ formed gels at selected predetermined time points, namely, immediately after injection, the initial and final assessment time points (1 hour and 14 days), the time points when equilibrium with water was established (24 and 72 hours), and the time point at which the most prominent morphological changes of in situ formed gels were observed (7 days). In addition, for better visualization, in situ formed gels in vials immediately after injection are also illustrated. Supplementary Figure S1 provides visual representation of in situ formed gels at other predetermined time points, namely 6, 12, 36, 48 hours, and 10 days. The intensity of color of all in situ formed gels was the highest at the first time point, i.e. 1 hour, due to the thorough uptake of the aqueous medium, which began immediately upon contact with it. The most compact forms with a noticeably low quantity of absorbed aqueous medium were formed by (E/L)Go50 and (E/L)Gl50, which, unlike the milky yellow (E/L)Go80 and (E/L)Gl80, were bright yellow. The least firm and visibly the biggest volumes of in situ formed gels were observed for (E/L)Go80 and (E/L)Gl80, as they absorbed a high quantity of aqueous medium.
(E/L)Go60, (E/L)Gl60, (E/L)Go70, and (E/L)Gl70 typically represented an ‘intermediate stage’. They combined both clear and cloudy areas in color, and their consistency was softer than that of (E/L)Go50 and (E/L)Gl50 and firmer than that of (E/L)Go80 and (E/L)Gl80. With the exception of less intense color and a slower process of swelling, after 6, 12, 24, 36, 48, and 72 hours no significant changes in the macroscopic appearance of the in situ formed gels were observed. However, after 7 days notable changes in color and consistency were detected for all in situ formed gels as they started to liquefy and decrease in size. Over the following days, the process of liquefaction and erosion was slowly progressing. Interestingly, at the last time point, i.e. 14 days, the remaining in situ formed gels of (E/L)Go80 and (E/L)Gl80 settled at the bottom of the vials, while the remnants of the other systems were still floating. When comparing all precursor formulations, due to the yellow color of glycerol monooleate and glycerol monolinoleate, respectively, gels containing more lipid phase were more yellow. More importantly, we found that in situ formed gels with a higher lipid content were more compact and eroded more slowly, while in situ formed gels containing a higher amount of the ethanol/lecithin mixture were softer and degraded faster. Phase transition analysis within gelation test As the microstructure of in situ formed gel is a crucial factor influencing the drug release kinetics, the phase transition of precursor formulations upon contact with aqueous medium was monitored using PLM at 37 °C. The analysis was performed at the same time points as the gelation test was carried out. Photomicrographs obtained at the selected predetermined time points, i.e. 1, 24, 72 hours, and 7 and 14 days (see chapter Gelation test) are shown in .
Supplementary Figure S2 provides photomicrographs taken at other predetermined time points, namely 6, 12, 36, 48 hours, and 10 days. The phase changes at post-hydration time of precursor formulations with excess aqueous medium revealed dynamic phase transitions, which were caused by rearrangement of molecules within precursor formulations. These microstructural changes can be understood in terms of aqueous self-assembly of amphiphile mixtures explained by their critical packing parameter (CPP). Namely, the self-assembly of single amphiphiles in aqueous medium is driven by a balance between the hydrophobic interactions of the tails and the geometrical packing constraints of the polar head groups. These factors are expressed as CPP = v/al , where v is the volume of the hydrophobic tail, a is the polar head group area, and l is the hydrophobic tail length of the amphiphilic molecule. As a guideline, amphiphiles with a CPP ∼1 usually self-assemble into lamellar LCCs (Engström & Engström, ), a CPP of ∼1.3 is characteristic for bicontinuous cubic mesophases, while amphiphiles with a CPP ∼1.7 form inverted hexagonal mesophases (Larsson, ). In regard to our results, clearly visible and numerous fan-like textures emerging from dark background, were observed for (E/L)Go50, (E/L)Gl50, (E/L)Go60, (E/L)Gl60, (E/L)Go70, and (E/L)Gl70 at the first time point of the assessment and they persisted until the conclusion of the analysis. It appears that hexagonal LCCs were quickly formed from these precursor formulations and that their microstructure was preserved until the final time point of the analysis. These findings can be attributed to the high content of glycerol monooleate and glycerol monolinoleate, respectively, in these precursor formulations. Namely, upon contact with aqueous medium, the polar head groups of the amphiphilic lipid from precursor formulations begin to move more freely. 
Consequently, these movements induce disorder in the hydrophobic chain of the amphiphilic lipid, leading to an increase of volume of the hydrophobic tail – v . However, the cross-sectional area of the polar head groups stays constant due to the strong hydrogen bonding. Therefore, CPP value increases as v increases and the polar head group area – a and the hydrophobic tail length of the amphiphilic molecule – l remain constant, thereby facilitating phase transition to hexagonal mesophases (Borgheti-Cardoso et al., ; Ferreira, ). When looking at (E/L)Go80 and (E/L)Gl80, hexagonal LCCs were also mainly present at all time points of the assessment. However, it should be noted that here fan-like structures were less pronounced. In addition, Maltese crosses indicative of lamellar LCCs were observed at the initial time point for (E/L)Go80 and (E/L)Gl80, with their presence slowly increasing throughout the analysis. Given that these precursor formulations contained a high content of lecithin/ethanol mixture, its effect was reflected in the resulting mesophases. Namely, a CPP value for lecithin, specifically for phosphatidylcholine as its main component, ranges from 0.5 to 1, meeting the requirement for bilayer formation of lamellar mesophases. Furthermore, ethanol molecules intercalated within phospholipid bilayers of lecithin additionally contributed to the lipid bilayer fluidity (Mkam Tsengam et al., ). As a result, in the case of the abovementioned precursor formulations, the self-assembly of lamellar mesophases was also observed along with formation of hexagonal mesophases. Water uptake evaluation Swelling behavior of in situ formed gels is another important characteristic influencing the drug release behavior. Therefore, their water uptake kinetics was evaluated at predetermined time points at temperature of 37 °C. The water uptake was monitored over time until equilibrium with water was reached. 
The determined value represented the maximum water absorption, referred to as water uptake capacity, shown in . The obtained results showed that the water uptake of all in situ formed gels increased rapidly in the first hour upon contact with excess aqueous medium and then gradually leveled off. The equilibrium water absorption for (E/L)Go50, (E/L)Gl50, (E/L)Go60, (E/L)Gl60, and (E/L)Gl70 was determined at 24 hours, while for (E/L)Go70, (E/L)Go80, and (E/L)Gl80 the equilibrium with water was reached after 72 hours. When considering the water capacities of in situ formed gels, the obtained data indicated that the chosen lipid played a pivotal role in their swelling behavior. The lowest water capacity was determined for (E/L)Go50 (5.4%) and (E/L)Gl50 (2.9%) consisting of the highest proportion of glycerol monooleate and glycerol monolinoleate, respectively. Slightly higher water capacities were observed for (E/L)Go60 (15.2%), (E/L)Gl60 (12.1%), and (E/L)Gl70 (13.4%). These results are in good agreement with phase transition analysis within gelation test, where it has been shown that fan-like textures, indicating hexagonal LCCs, are continually present in these in situ formed gels. According to the literature, water channels within hexagonal mesophases are closed to the external environment, hence water diffusion is retarded (Chavda et al., ). Further, moderately higher water capacity was determined for (E/L)Go70 (25.2%), while (E/L)Go80 (82.2%) and (E/L)Gl80 (74.3%) stood out with the highest water capacity. Again, these results correlate well with phase transition analysis within gelation test, revealing that in addition to hexagonal LCCs, lamellar mesophases are also present in (E/L)Go80 and (E/L)Gl80. It is known that lamellar LCCs usually absorb more water (Alfutimie et al., ). When looking at all the results together, another interesting finding can be observed. 
Namely, in situ formed gels containing glycerol monooleate appeared to absorb a higher amount of water compared to those containing glycerol monolinoleate. This overall trend is important to note, as it seems to be also reflected in the results of the in vitro release testing presented later in the study. Differential scanning calorimetry DSC analysis was performed to elucidate intermolecular interactions and water state within in situ formed gels after reaching equilibrium with water. Evaluation was performed based on the crystallization (T c ) and melting (T m ) temperatures visible in the crystallization and the melting curves as well as the enthalpies of crystallization (ΔH c ) and melting (ΔH m ) derived by integrating the areas under the corresponding peaks in the DSC thermograms. Initially, assessment was carried out for individual compounds, i.e. ethanol, lecithin, glycerol monooleate, glycerol monolinoleate, and bidistilled water. On the crystallization curves of individual components , no thermal events were observed for ethanol or lecithin. However, for glycerol monooleate, a minor broad exothermic peak at T c1 = 15.7 °C (ΔH c1 = 0.35 J/g) and a noticeable exothermic peak at T c2 = −0.8 °C (ΔH c2 = 133.0 J/g) were detected. For glycerol monolinoleate, a small exothermic triple peak appeared (T c1 = −16.3 °C, ΔH c1 = 1.2 J/g, T c2 = −22.9 °C, ΔH c2 = 6.7 J/g, T c3 = −29.2 °C, ΔH c3 = 5.6 J/g). The observed peaks of both lipids can be attributed to the rearrangement and/or crystallization of glycerol monooleate and glycerol monolinoleate molecules, respectively (Chauhan et al., ). Linoleic acid (C 18:2 ) has one more double bond than oleic acid (C 18:1 ), which contributes to its higher degree of unsaturation and greater mobility. This increased mobility is evidenced by more crystallization peaks, as observed in the DSC thermograms (Nyame Mendendy Boussambe et al., ).
Regarding bidistilled water, at T c1 = −20.1 °C (ΔH c1 = 239.6 J/g) a sharp exothermic peak appeared, coinciding with the crystallization of supercooled water. Next, on the melting curves of individual components , again no thermal events were observed for lecithin. Nevertheless, in case of ethanol an endothermic peak was detected at T m1 = 76.2 °C (ΔH m1 = −775.5 J/g) corresponding to its evaporation. In the case of lipid components, their melting was characterized by small double endothermic peaks. More specifically, at T m1 = 3.3 °C (ΔH m1 = −15.4 J/g) and T m2 = 13.7 °C (ΔH m2 = −12.6 J/g) for glycerol monooleate, and at T m1 = −15.6 °C (ΔH m1 = −13.5 J/g) and T m2 = 4.9 °C (ΔH m2 = −6.9 J/g) for glycerol monolinoleate. The thermal events of bidistilled water were observed at T m1 = −0.3 °C (ΔH m1 = −278.4 J/g), attributed to ice melting, followed by a broad endothermic peak at T m2 = 97.9 °C (ΔH m2 = −1709.2 J/g), ascribed to its evaporation. In the next step, evaluation of in situ formed gels after reaching equilibrium with water was performed with special attention given to the water state within them. Water, located near the polar heads of amphiphilic molecules in LCCs, exhibits different thermal properties due to interactions that reduce its degrees of freedom when compared to water that is more distant from the polar heads. Consequently, water molecules forming stronger interactions with amphiphilic molecules solidify at lower temperatures compared to water with weaker interactions, resulting in a lower enthalpy of freezing, sometimes even below the detection limit. Based on this water is classified as non-freezable, freezable interlamellar bound water, and freezable bulk water (Ezrahi et al., ). displays the crystallization curves and shows the crystallization enthalpies of in situ formed gels after equilibrium with water was reached. 
As regards the crystallization curves of (E/L)Go50 and (E/L)Gl50, two wide exothermic peaks in the range of T c = −18.5 °C to T c = −43.1 °C with small areas under the curve appeared. It seems plausible that herein most of the water was located around the polar headgroups of ethanol and lecithin, with almost no free water in the in situ formed gel. Further, in the case of (E/L)Go60, (E/L)Gl60, (E/L)Go70, and (E/L)Gl70, we detected two exothermic peaks in the range of T c = −16.2 °C to T c = −44.7 °C, representing the crystallization of free water and bound water from the second hydration layer. In these in situ formed gels, water was present around the polar headgroups of ethanol and lecithin in addition to free water within the water channels, indicating that the polar headgroups were already saturated with water molecules. Further, it should be emphasized that certain differences were observed between (E/L)Go70 and the other listed in situ formed gels. Namely, the area under the first exothermic peak at approximately −20 °C, attributed to free water within the system, was 3- to 5-times larger in the case of (E/L)Go70, which corresponds well with the results of water uptake evaluation and is also shown in the in vitro release testing presented later in the study. In regard to the crystallization curves of (E/L)Go80 and (E/L)Gl80, only one exothermic peak appeared, at −22.6 °C and −22.8 °C, respectively, which, in terms of the size of the area under the curve and its shape, most closely resembles the reference peak of bidistilled water. It can be postulated that the polar headgroups of ethanol and lecithin are already fully saturated with water molecules of the first and the second hydration layer and that a significant amount of absorbed water is in the form of free water within the water channels.
It seems plausible that this free water mostly belongs to lamellar mesophases, which were detected in addition to hexagonal LCCs by PLM analysis of these in situ formed gels. To note, all of these findings are in good agreement with the results of the gelation test and water uptake evaluation, which confirm the lowest water absorption of (E/L)Go50 and (E/L)Gl50, contrary to (E/L)Go80 and (E/L)Gl80 with the highest water uptake. shows the melting curves, and presents the melting enthalpies of in situ formed gels after equilibrium with water was reached. Melting of ice formed within the cooling cycle of the analysis was noted at approximately 0 °C. In addition, evaporation of ethanol was detected at approximately 78 °C, while water evaporated at approximately 100 °C. The obtained melting curves confirm the trends observed from the crystallization curves, where the positions and areas under the curves were directly proportional to the content of absorbed water within in situ formed gels. Rheological measurements Microstructure evaluation of precursor formulations as well as in situ formed gels after reaching equilibrium with water was further upgraded by rheological tests, which provided additional insights into their flow behavior under applied stress as well as their viscoelastic characteristics. These measurements contributed to a comprehensive understanding of the structural changes accompanying the sol–gel transition, in addition to the structural analyses performed using PLM and DSC. Firstly, rotational measurements were performed to elucidate the flow behavior of a system subjected to applied stress, offering an insight into microstructural alterations upon SC administration. The viscosity curves of all precursor formulations obtained at 25 °C demonstrated a constant viscosity regardless of increasing shear rate. This finding confirms that all precursor formulations exhibited Newtonian fluid behavior, a desirable feature for injectables designed for SC administration.
When comparing the viscosities of precursor formulations at the lowest shear rate, a positive correlation between lipid content and viscosity was revealed. However, it is important to note that the viscosities of all precursor formulations ranged from 17.0 cP to 36.9 cP, being far below 50 cP, therefore confirming their suitability for SC injection (Miller et al., ). Precursor formulations with glycerol monolinoleate as a lipid phase generally exhibited lower viscosities. Further, rotational tests were also performed for in situ formed gels after reaching equilibrium with water at a temperature of 37 °C. Their viscosities decreased with increasing shear rate until they reached a constant value at high shear rates, hence all in situ formed gels can be classified as non-Newtonian pseudoplastic systems, which is also consistent with our expectations. The viscosity values of in situ formed gels were markedly higher than those of the precursor formulations. In addition, while only negligible variations in viscosity values at the lowest shear rate (1 s −1 ) were detected for precursor formulations, notable variations were observed for in situ formed gels, which can be explained by their spontaneous formation. Therefore, viscosities for in situ formed gels are presented at 2 s −1 , with a similar trend to that observed for their respective precursor formulations. More specifically, viscosity values ranged between 111.9 Pa·s and 698.0 Pa·s. In situ formed gels containing glycerol monolinoleate as a lipid phase in general exhibited lower viscosities. All of these results correlate well with the gelation test and the corresponding PLM analysis. Further, oscillatory shear frequency sweep measurements were performed for in situ formed gels after reaching equilibrium with water at 37 °C , as they provide important information regarding the viscoelastic properties of a system corresponding to its network structure.
Therefore, rheological parameters, including the storage (G′) and loss (G″) moduli as well as the complex viscosity (η*), were recorded across various angular frequencies. The G′ modulus reflects the elastic properties of a system, with high values demonstrating strong elasticity and structure, while high G″ values suggest a predominantly viscous, liquid-like material. Depending on the dominant modulus, a system can be classified as either elastic or viscous. For all in situ formed gels, the G′ modulus was generally higher than the G″ modulus over the frequency range, whereas the complex viscosity decreased with increasing frequency, indicating predominantly elastic behavior. This is characteristic of gel-like systems and can be attributed to the well-organized microstructure of the LCCs. More specifically, in the case of (E/L)Go50, (E/L)Gl50, (E/L)Go60, (E/L)Gl60, and (E/L)Gl70, the G′ and G″ moduli were strongly enhanced with increasing frequency. This rheological pattern is representative of hexagonal LCCs (Xingqi et al., ) and is in good agreement with the PLM photomicrographs. Furthermore, similar curves were observed for (E/L)Go70, although its G′ and G″ moduli were less enhanced with increasing frequency, indicating a less dense network of hexagonal LCCs. Notably, weaker interactions between surfactant molecules and water were also confirmed by DSC measurements for this in situ formed gel. Next, the rheological behavior of (E/L)Go80 and (E/L)Gl80 also correlated well with the results of the other assessments. Namely, in the case of these two in situ formed gels, both the G′ and G″ moduli were nearly independent of the angular frequency over the entire range investigated, with a large gap between the two curves, suggesting the coexistence of hexagonal mesophases along with lamellar LCCs (Mistry et al., ), as we had also anticipated based on the PLM analysis.
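The elastic-versus-viscous classification described above reduces to the loss tangent tan δ = G″/G′ and the complex viscosity |η*| = √(G′² + G″²)/ω. A short sketch with illustrative moduli (not the measured sweep data):

```python
import math

def loss_tangent(g_storage, g_loss):
    # tan(delta) = G''/G'; values below 1 at every frequency indicate a
    # predominantly elastic, gel-like network.
    return [gl / gs for gs, gl in zip(g_storage, g_loss)]

def complex_viscosity(omega, g_storage, g_loss):
    # |eta*| = sqrt(G'^2 + G''^2) / omega; for gels it falls with frequency.
    return [math.hypot(gs, gl) / w for w, gs, gl in zip(omega, g_storage, g_loss)]

# Illustrative frequency sweep (rad/s, Pa): G' grows with frequency and stays
# above G'', as reported for the hexagonal-phase gels.
omega = [0.1, 1.0, 10.0, 100.0]
g_prime = [900.0, 1100.0, 1400.0, 1800.0]
g_double = [200.0, 260.0, 350.0, 480.0]

assert max(loss_tangent(g_prime, g_double)) < 1.0          # elastic, gel-like
eta_star = complex_viscosity(omega, g_prime, g_double)
assert all(a > b for a, b in zip(eta_star, eta_star[1:]))  # decreases with frequency
```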
In vitro release testing
Selection of the release medium
The newly developed in situ forming liquid crystalline systems were designed for the sustained release of the peptide drug Tα1, which has inherently poor stability. Therefore, to ensure an optimal in vitro release testing experiment over a period of 2 weeks, preliminary studies were conducted to assess the influence of the release medium on Tα1's stability. In addition, given that the experiment was carried out at 37 °C and that the samples were stored for subsequent UHPLC analysis after sampling, the effect of storage temperature on Tα1's stability was also evaluated. Within the assessment of release media, we examined Tα1's stability with regard to the absence (ultrapure water) or presence of ions in various buffers (PBS, simulated body fluid), the pH value (6.8 and 7.4), and the proportion of ethanol (5%, 100% (m/m)). Additionally, as part of the temperature stability testing, Tα1's stability was evaluated at the following temperatures for all release media: −20 °C (freezer temperature), 8 °C (refrigerator temperature), 25 °C (room temperature), and 37 °C (body temperature). At −20 °C and 8 °C, Tα1's stability was adequate in all tested release media. However, evident differences in stability were observed at elevated temperatures. Notably, we found that adding a small proportion of ethanol improved the stability of the peptide drug Tα1 in the release medium, proving its key influence on Tα1's stability. Considering this finding and the literature data reporting that a slightly acidic pH improves Tα1's stability (Dai et al., ), PBS (pH = 6.8) containing 5% (m/m) of ethanol was selected as the most appropriate release medium, providing adequate stability at all tested temperatures over the entire testing period.
The potential effect of ethanol on in situ depot formation was investigated by PLM microstructural examination of the in situ formed gels exposed to the release medium containing 5% (m/m) ethanol after equilibrium with water was reached (data not shown). This proportion of ethanol was demonstrated to have no effect on depot formation.
In vitro release of Tα1 from in situ formed gels
Achieving the sustained release of the peptide drug Tα1 was one of the pivotal aspects we focused on in the development of the in situ forming liquid crystalline systems in this study. Thus, in vitro release testing was performed to evaluate their potential for minimizing Tα1's dosing frequency, which could greatly improve patient compliance upon clinical translation of the systems. displays the cumulative in vitro release of the peptide drug Tα1 from the in situ formed gels over a period of 2 weeks. All the studied in situ formed gels demonstrated sustained release profiles; however, noticeable differences were observed among them. (E/L)Go80 and (E/L)Gl80 exhibited the greatest total drug release after 2 weeks, with 84.2% and 93.4%, respectively. Further, (E/L)Go70 released 19.1% of Tα1 after 2 weeks. It is important to note that this represented a 2- to 4-times greater total drug release compared to the other in situ formed gels. Namely, they exhibited comparable amounts of released drug after 2 weeks: 8.4% for (E/L)Gl70, 8.3% for (E/L)Go50, 7.8% for (E/L)Gl50, 5.8% for (E/L)Gl60, and 5.5% for (E/L)Go60. The observed differences can be explained by the bidirectional relationship among the variables influencing the drug release mechanism from LCCs, specifically the hydrophilic character of the peptide drug Tα1 (Goldstein et al., ), which determines its affinity for the water channels of LCCs, as well as the composition and microstructure of the LCCs with their interrelated water uptake capacity.
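Cumulative release percentages such as those above are conventionally computed with a mass-balance correction for the medium withdrawn at each sampling point. A hedged sketch; the medium volume, sample volume, dose, and concentrations below are hypothetical, since the authors' exact sampling protocol is not restated here:

```python
def cumulative_release_percent(concentrations, medium_ml, sample_ml, dose_mg):
    # concentrations: drug concentration (mg/mL) measured at each sampling point.
    # Each sampling removes sample_ml of medium (replaced with fresh medium),
    # so the drug withdrawn earlier is added back to the running total.
    percents, withdrawn_mg = [], 0.0
    for c in concentrations:
        released_mg = c * medium_ml + withdrawn_mg
        percents.append(100.0 * released_mg / dose_mg)
        withdrawn_mg += c * sample_ml
    return percents

# Hypothetical example: 10 mL of medium, 1 mL samples, 10 mg dose.
profile = cumulative_release_percent([0.10, 0.20, 0.25], 10.0, 1.0, 10.0)
assert profile == sorted(profile)  # cumulative release is non-decreasing
```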
It is known from the literature that the release of hydrophilic drugs from lamellar LCCs, which are in general more highly hydrated mesophases, is more rapid than from hexagonal LCCs with relatively low water absorption. This phenomenon can be attributed to an increase in the water channels available for the release of hydrophilic drugs with increasing water content within the system (Borgheti-Cardoso et al., ; Elnaggar et al., ). In the present study, the coexistence of hexagonal mesophases along with lamellar LCCs was confirmed by PLM analysis and oscillatory measurements for (E/L)Go80 and (E/L)Gl80. Consequently, the water uptake capacity of (E/L)Go80 and (E/L)Gl80 was exceptionally high, and their release was greater than that of the other in situ formed gels. In keeping with this, the release profiles of the other in situ formed gels were also consistent with the explanation provided above. Namely, they formed only hexagonal mesophases, resulting in noticeably sustained release profiles. Among these, (E/L)Go70 demonstrated a moderately greater total drug release, which corresponded with its higher water uptake capacity and the associated larger proportion of free water, as confirmed by DSC measurements as well. In other words, the larger amount of free water within the water channels of the hexagonal mesophases present in (E/L)Go70 enabled a moderately greater release of the hydrophilic peptide drug Tα1. However, it must still be taken into account that (E/L)Go70 formed only hexagonal mesophases, in which the water channels are closed to the external environment, so water diffusion is retarded (Chavda et al., ). As the other in situ formed gels exhibiting solely hexagonal mesophases showed similar water uptake capacities and a similar intermolecular network, as identified by DSC analysis, the amounts of released peptide drug Tα1 were comparable. Further, the secondary structure of the peptide drug Tα1 was examined using CD spectroscopy.
Considering the literature indicating that Tα1 is an intrinsically disordered peptide at neutral pH and body temperature in water, with various solvents capable of inducing structural changes (Hoch & Volk, ), its structural stability was systematically evaluated in different samples throughout processing. Supplementary Figure S3A shows the dichroic profile of the peptide drug Tα1 in ethanol for incorporation into the formulation, indicating a β-sheet conformation (Greenfield, ). Further, the CD spectrum of the peptide drug Tα1 in the release medium after drug release testing, shown in Supplementary Figure S3B, indicates that the peptide adopted a random coil conformation in the aqueous environment, which aligns well with previously reported findings (Grottesi et al., ). In addition, it also correlates with the CD spectra obtained for the dissolved lyophilisate of the peptide drug Tα1 in the release medium and in PBS itself (Supplementary Figures S3C and S3D). Taken together, these results confirm that the peptide drug Tα1 adopts and maintains its native conformation, characteristic of an aqueous environment, in the release medium after completion of the in vitro release testing. Notably, the conformational changes in different environments may serve as structural prerequisites for Tα1's interaction with lymphocyte membranes, potentially representing the initial event in lymphocyte activation during immune response modulation, thereby highlighting their functional relevance (Grottesi et al., ). To conclude, the results of the in vitro release testing demonstrated that adjusting the composition of precursor formulations facilitates the regulation of the in situ formed gels' microstructure, thereby controlling the release profiles of the incorporated peptide drug Tα1. Furthermore, the release profiles obtained over a period of 2 weeks imply the potential of the in situ formed gels developed in this study to prolong the peptide drug Tα1's release and notably minimize its dosing frequency.
Nevertheless, it is important to note that the percentage of released peptide drug Tα1 increased only slightly after the initial release observed in the first days of the in vitro release testing. A similar release behavior has also been reported for the peptide drug leuprolide acetate from liquid crystalline hexagonal mesophases (Báez-Santos et al., ). Upon administration, however, the SC tissue pressure, along with the flow of SC interstitial fluid perfusing the in situ formed depots, is expected to assist the erosion of the in situ formed gel matrix and enhance the drug release (Torres-Terán et al., ).
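The authors do not report fitting release models; purely as an illustration of how such profiles are often analyzed, the Korsmeyer–Peppas exponent n can be estimated from the early portion of a release curve (n ≈ 0.5 suggesting Fickian diffusion). The times and fractions below are synthetic:

```python
import math

def peppas_exponent(times_h, fraction_released):
    # log(Mt/Minf) = log(k) + n * log(t); fit n on points with Mt/Minf <= 0.6,
    # the usual validity range of the Korsmeyer-Peppas model.
    pts = [(math.log(t), math.log(f))
           for t, f in zip(times_h, fraction_released) if 0.0 < f <= 0.6]
    xs, ys = zip(*pts)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic diffusion-controlled profile: Mt/Minf = 0.1 * sqrt(t).
times = [1.0, 4.0, 9.0, 16.0, 25.0]
fractions = [0.1 * math.sqrt(t) for t in times]
assert abs(peppas_exponent(times, fractions) - 0.5) < 1e-9
```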
When developing a novel drug delivery system based on LCCs, the phase behavior of systems with precisely defined components can be investigated using ternary or pseudoternary phase diagrams. In this study, four pseudoternary phase diagrams were constructed for the systems containing ethanol/glycerol monooleate/hydrophilic phase, ethanol/glycerol monolinoleate/hydrophilic phase, ethanol/lecithin/glycerol monooleate/hydrophilic phase, and ethanol/lecithin/glycerol monolinoleate/hydrophilic phase. Ethanol served as a hydrotropic substance for viscosity reduction (Ferreira et al., ) as well as for stabilization of the peptide drug Tα1. Lecithin was chosen as a biocompatible amphiphile capable of forming and stabilizing various LCC mesophases depending on the remaining amphiphiles in the mixture (Gosenca et al., ). Glycerol monooleate and glycerol monolinoleate were selected as hexagonal and/or cubic mesophase-forming amphiphilic lipids. They possess excellent biocompatibility and biodegradability due to the presence of ester bonds in their structure, which undergo lipolytic degradation by endogenous lipases after SC injection (Zhang et al., ). The hydrophilic phase represented the aqueous environment of the SC tissue. Macroscopic analysis of the systems formed across the constructed pseudoternary phase diagrams was initially performed. Different proportions and types of components resulted in the formation of distinct systems. Transparent or semi-transparent, viscous gel-like systems were identified as LCCs. As our focus was on the formation of liquid crystalline mesophases, we did not explore in detail other regions of the diagrams where nonhomogeneous systems, coarse emulsions, or microemulsions were present. The pseudoternary phase diagram study showed that the absence or presence of lecithin had a key influence on the formation of LCCs, while the type of lipid did not show any significant effect.
Namely, in the diagrams from and employing only ethanol, lipid, and hydrophilic phase, a considerably smaller region of LCC formation was observed compared to the diagrams from and utilizing the ethanol/lecithin mixture, lipid, and hydrophilic phase. More specifically, in the case of the diagram from , adding the hydrophilic phase at a level of 10–30% to precursor formulations containing 10–20% of ethanol resulted in the formation of LCCs. In parallel, a similarly small region of LCCs was distinguished in the diagram from , where addition of the hydrophilic phase in the range of 50–70% to precursor formulations with 10–20% of ethanol led to LCC formation. On the other hand, in the case of the diagrams from and , precursor formulations containing 50–80% of the ethanol/lecithin mixture formed a large region of LCCs after the hydrophilic phase was added in the range of 10–90%. When comparing the systems across all diagrams from , it is important to note that the LCCs formed in the diagrams from and exhibited favorable macroscopic characteristics such as homogeneity and high viscosity.
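The compositional logic of such a pseudoternary diagram, in which the ethanol/lecithin mixture at a fixed internal ratio is treated as a single apex, can be sketched programmatically. The region bounds below paraphrase the ranges reported for the lecithin-containing diagrams and are approximate:

```python
def pseudoternary_points(step=10):
    # All (mixture, lipid, hydrophilic) wt% compositions summing to 100.
    return [(m, l, 100 - m - l)
            for m in range(0, 101, step)
            for l in range(0, 101 - m, step)]

def in_reported_lcc_region(mixture, lipid, hydrophilic):
    # Approximate LCC region for the lecithin-containing diagrams: the
    # water-free precursor holds 50-80% ethanol/lecithin mixture, and LCCs
    # form once 10-90% of hydrophilic phase is taken up.
    precursor = mixture + lipid
    if precursor == 0:
        return False
    mixture_share = 100.0 * mixture / precursor
    return 50.0 <= mixture_share <= 80.0 and 10.0 <= hydrophilic <= 90.0

grid = pseudoternary_points()
lcc = [p for p in grid if in_reported_lcc_region(*p)]
assert (30, 20, 50) in lcc  # 60% mixture in the precursor, 50% hydrophilic phase
```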
Systems identified as potential LCCs based on the macroscopic analysis of the pseudoternary phase diagrams underwent subsequent microscopic analysis using a polarized light microscope at room (25 °C) and body (37 °C) temperature. The acquired observations are schematically presented in , while representative photomicrographs are depicted in . PLM is one of the most commonly used methods for investigating LCC mesophases, offering valuable insight into their molecular organization and phase transitions. When subjected to polarized light, anisotropic systems such as lamellar and hexagonal LCCs exhibit characteristic birefringent patterns. Maltese crosses together with oily streaks denote the presence of lamellar mesophases, while fan-like textures indicate the formation of hexagonal mesophases. A dark background, with no birefringence, suggests the presence of cubic mesophases, known for their optically isotropic behavior (Manaia et al., ; Zhang et al., ). First, evaluation of the systems was performed at a temperature of 25 °C. In the case of the systems from the first and the second diagram, the dark view under the polarized light microscope pointed to the presence of cubic LCCs, in good agreement with the investigation by Mei et al. In contrast, the phase behavior of the systems depicted in the third and fourth diagram exhibited distinctive characteristics of LCCs, confirming that lecithin has a key function in the formation of these LCCs. Namely, numerous and pronounced fan-like textures started to emerge from the dark background for precursor formulations containing 50%, 60%, and 70% of the ethanol/lecithin mixture, independent of the hydrophilic phase content. This observation is important, as it leads us to two key findings. Firstly, it indicates the presence of hexagonal LCCs, in some areas coexisting with cubic LCCs; nevertheless, hexagonal mesophases strongly prevailed.
Secondly, it implies that absorption of only a small amount of hydrophilic phase was necessary for these precursor formulations to form hexagonal LCCs in situ. Further, mixed phases of hexagonal together with lamellar LCCs were formed from precursor formulations containing 80% of the ethanol/lecithin mixture, but only after a larger addition of hydrophilic phase, i.e. 40–90%. It should be noted that, for precursor formulations containing 80% of the ethanol/lecithin mixture, the fan-like textures were less numerous and pronounced, and individual Maltese crosses were also observed. Comparable results were obtained at a temperature of 37 °C for all four diagrams. The only exception was that precursor formulations from the third and the fourth diagram containing 80% of the ethanol/lecithin mixture already showed fan-like textures along with Maltese crosses when the hydrophilic phase content reached 10%. This indicates that the higher temperature induces, to some extent, the formation of hexagonal LCCs together with lamellar LCCs, which is desirable for SC administration. Based on all the findings obtained from the pseudoternary phase diagram construction (i.e. macroscopic appearance) and the corresponding phase behavior analysis (i.e. microstructure), eight precursor formulations, capable of in situ phase transition to hexagonal LCCs upon addition of water, were selected for further characterization studies. The composition of the selected precursor formulations is reported in and highlighted in .
Gelation time denotes the time required for a precursor formulation to transform into an in situ formed gel when exposed to excess aqueous medium (Mei et al., ). A rapid sol–gel transition is desired to minimize the possibility of initial burst release, as in situ gel formation retards the release of the incorporated drug. At room temperature, all precursor formulations were clear with good fluidity, but upon contact with aqueous medium heated to 37 °C, they quickly lost their flowability. Among all precursor formulations, a very short gelation time of a few seconds was measured for (E/L)Go50 (2.8 seconds), (E/L)Gl50 (2.5 seconds), (E/L)Go60 (4.1 seconds), and (E/L)Gl60 (3.6 seconds). A slightly longer gelation time was measured for (E/L)Go70 (14.1 seconds) and (E/L)Gl70 (13.3 seconds). The longest sol–gel transition time was determined for (E/L)Go80 (70.0 seconds) and (E/L)Gl80 (45.4 seconds). These results indicate that all precursor formulations would spontaneously transform into an in situ gel at the site of administration upon exposure to physiological fluid. However, certain differences can be observed among the precursor formulations, which are most likely related to their composition. Namely, increasing the glycerol monooleate or glycerol monolinoleate content leads to a decrease in gelation time, which is also consistent with the literature data (Mei et al., ). This phenomenon can be attributed to glycerol monooleate and glycerol monolinoleate being amphiphilic lipids capable of forming hexagonal LCCs; consequently, they can rapidly induce the formation of these highly viscous mesophases.
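The inverse relationship between lipid content and gelation time can be checked directly on the reported values. Assuming the numeric suffix in each name gives the wt% of the ethanol/lecithin mixture in the precursor (so the lipid fraction is roughly 100 minus the suffix, an interpretation of the naming rather than a stated fact), the rank correlation is:

```python
gelation_time_s = {  # gelation times reported in the text
    "(E/L)Go50": 2.8, "(E/L)Go60": 4.1, "(E/L)Go70": 14.1, "(E/L)Go80": 70.0,
    "(E/L)Gl50": 2.5, "(E/L)Gl60": 3.6, "(E/L)Gl70": 13.3, "(E/L)Gl80": 45.4,
}

def spearman_rho(xs, ys):
    # Spearman rank correlation (ties broken by position, adequate here).
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# Assumed lipid fraction: 100 - (ethanol/lecithin wt% encoded in the name).
lipid_pct = [100 - int(name[-2:]) for name in gelation_time_s]
times = list(gelation_time_s.values())
rho = spearman_rho(lipid_pct, times)
assert rho < -0.9  # strongly negative: more lipid, faster gelation
```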
The ability of a precursor formulation to form and maintain an in situ formed gel upon injection into excess aqueous medium heated to 37 °C was evaluated within the gelation test. shows the visual appearance of in situ formed gels at selected predetermined time points, namely immediately after injection, at the initial and final assessment points (1 hour and 14 days), at the time points when equilibrium with water was established (24 and 72 hours), and at the time point at which the most prominent morphological changes of the in situ formed gels were observed (7 days). In addition, for better visualization, the in situ formed gels in vials immediately after injection are also illustrated. Supplementary Figure S1 provides a visual representation of the in situ formed gels at the other predetermined time points, namely 6, 12, 36, and 48 hours, and 10 days. The color intensity of all in situ formed gels was highest at the first time point, i.e. 1 hour, owing to the thorough uptake of the aqueous medium, which began immediately upon contact with it. The most compact forms, with a noticeably low quantity of absorbed aqueous medium, were formed by (E/L)Go50 and (E/L)Gl50, which, unlike the milky yellow (E/L)Go80 and (E/L)Gl80, were bright yellow. The least firm and visibly largest volumes of in situ formed gels were observed for (E/L)Go80 and (E/L)Gl80, as they absorbed a high quantity of aqueous medium. (E/L)Go60, (E/L)Gl60, (E/L)Go70, and (E/L)Gl70 typically represented an 'intermediate stage': they combined both clear and cloudy areas in color, and their consistency was softer than that of (E/L)Go50 and (E/L)Gl50 and firmer than that of (E/L)Go80 and (E/L)Gl80. With the exception of less intense color and a slower swelling process, no significant changes in the macroscopic appearance of the in situ formed gels were observed after 6, 12, 24, 36, 48, and 72 hours.
However, after 7 days, notable changes in color and consistency were detected for all in situ formed gels, as they started to liquefy and decrease in size. Over the following days, the process of liquefaction and erosion progressed slowly. Interestingly, at the last time point, i.e. 14 days, the remnants of the in situ formed gels of (E/L)Go80 and (E/L)Gl80 settled at the bottom of the vials, while the remnants of the other systems were still floating. When comparing all precursor formulations, gels containing more lipid phase were more yellow, owing to the yellow color of glycerol monooleate and glycerol monolinoleate. Most importantly, we found that in situ formed gels with a higher lipid content were more compact and eroded more slowly, while in situ formed gels containing a higher amount of the ethanol/lecithin mixture were softer and degraded faster.
As the microstructure of an in situ formed gel is a crucial factor influencing the drug release kinetics, the phase transition of precursor formulations upon contact with aqueous medium was monitored using PLM at 37 °C. The analysis was performed at the same time points as the gelation test. Photomicrographs obtained at the selected predetermined time points, i.e. 1, 24, and 72 hours, and 7 and 14 days (see chapter Gelation test), are shown in . Supplementary Figure S2 provides photomicrographs taken at the other predetermined time points, namely 6, 12, 36, and 48 hours, and 10 days. Post-hydration monitoring of the precursor formulations in excess aqueous medium revealed dynamic phase transitions, caused by rearrangement of molecules within the precursor formulations. These microstructural changes can be understood in terms of the aqueous self-assembly of amphiphile mixtures, explained by their critical packing parameter (CPP). Namely, the self-assembly of single amphiphiles in aqueous medium is driven by a balance between the hydrophobic interactions of the tails and the geometrical packing constraints of the polar head groups. These factors are expressed as CPP = v/(a·l), where v is the volume of the hydrophobic tail, a is the polar head group area, and l is the hydrophobic tail length of the amphiphilic molecule. As a guideline, amphiphiles with a CPP of ∼1 usually self-assemble into lamellar LCCs (Engström & Engström, ), a CPP of ∼1.3 is characteristic of bicontinuous cubic mesophases, while amphiphiles with a CPP of ∼1.7 form inverted hexagonal mesophases (Larsson, ). In regard to our results, clearly visible and numerous fan-like textures emerging from a dark background were observed for (E/L)Go50, (E/L)Gl50, (E/L)Go60, (E/L)Gl60, (E/L)Go70, and (E/L)Gl70 at the first time point of the assessment, and they persisted until the conclusion of the analysis.
It appears that hexagonal LCCs were quickly formed from these precursor formulations and that their microstructure was preserved until the final time point of the analysis. These findings can be attributed to the high content of glycerol monooleate or glycerol monolinoleate in these precursor formulations. Namely, upon contact with aqueous medium, the polar head groups of the amphiphilic lipid begin to move more freely. These movements induce disorder in the hydrophobic chains of the amphiphilic lipid, leading to an increase in the volume of the hydrophobic tail, v. However, the cross-sectional area of the polar head groups stays constant due to strong hydrogen bonding. Therefore, the CPP value increases as v increases while the polar head group area a and the hydrophobic tail length l remain constant, thereby facilitating the phase transition to hexagonal mesophases (Borgheti-Cardoso et al., ; Ferreira, ). Looking at (E/L)Go80 and (E/L)Gl80, hexagonal LCCs were also mainly present at all time points of the assessment; however, the fan-like structures were less pronounced. In addition, Maltese crosses indicative of lamellar LCCs were observed at the initial time point for (E/L)Go80 and (E/L)Gl80, with their presence slowly increasing throughout the analysis. Given that these precursor formulations contained a high content of the ethanol/lecithin mixture, its effect was reflected in the resulting mesophases. Namely, the CPP value for lecithin, specifically for phosphatidylcholine as its main component, ranges from 0.5 to 1, meeting the requirement for the bilayer formation of lamellar mesophases. Furthermore, ethanol molecules intercalated within the phospholipid bilayers of lecithin additionally contributed to the lipid bilayer fluidity (Mkam Tsengam et al., ).
As a result, in the case of the abovementioned precursor formulations, the self-assembly of lamellar mesophases was also observed along with formation of hexagonal mesophases.
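The CPP guideline above maps directly onto a small classifier. The cutoffs between the quoted guideline values (∼1, ∼1.3, ∼1.7) are illustrative midpoints chosen here, not sharp physical boundaries:

```python
def critical_packing_parameter(v, a, l):
    # CPP = v / (a * l): tail volume v, head-group area a, tail length l
    # (any consistent units, e.g. nm^3, nm^2, nm).
    return v / (a * l)

def predicted_mesophase(cpp):
    # Guideline values from the text: ~1 lamellar, ~1.3 bicontinuous cubic,
    # ~1.7 inverted hexagonal; midpoint cutoffs chosen for illustration.
    if cpp < 1.15:
        return "lamellar"
    if cpp < 1.5:
        return "bicontinuous cubic"
    return "inverted hexagonal"

# Hydration increases the tail volume v while a and l stay constant, pushing
# the CPP upward and driving the transition toward hexagonal mesophases, as
# described above.
assert predicted_mesophase(critical_packing_parameter(1.0, 1.0, 1.0)) == "lamellar"
assert predicted_mesophase(critical_packing_parameter(1.7, 1.0, 1.0)) == "inverted hexagonal"
```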
The swelling behavior of in situ formed gels is another important characteristic influencing the drug release behavior. Therefore, their water uptake kinetics was evaluated at predetermined time points at a temperature of 37 °C. The water uptake was monitored over time until equilibrium with water was reached. The value determined at equilibrium represents the maximum water absorption, referred to as the water uptake capacity, shown in . The obtained results showed that the water uptake of all in situ formed gels increased rapidly in the first hour upon contact with excess aqueous medium and then gradually leveled off. The equilibrium water absorption for (E/L)Go50, (E/L)Gl50, (E/L)Go60, (E/L)Gl60, and (E/L)Gl70 was reached at 24 hours, while for (E/L)Go70, (E/L)Go80, and (E/L)Gl80 equilibrium with water was reached after 72 hours. When considering the water uptake capacities of the in situ formed gels, the obtained data indicated that the chosen lipid played a pivotal role in their swelling behavior. The lowest water uptake capacity was determined for (E/L)Go50 (5.4%) and (E/L)Gl50 (2.9%), which consisted of the highest proportion of glycerol monooleate and glycerol monolinoleate, respectively. Slightly higher water uptake capacities were observed for (E/L)Go60 (15.2%), (E/L)Gl60 (12.1%), and (E/L)Gl70 (13.4%). These results are in good agreement with the phase transition analysis within the gelation test, which showed that fan-like textures, indicating hexagonal LCCs, were continually present in these in situ formed gels. According to the literature, the water channels within hexagonal mesophases are closed to the external environment, hence water diffusion is retarded (Chavda et al., ). Further, a moderately higher water uptake capacity was determined for (E/L)Go70 (25.2%), while (E/L)Go80 (82.2%) and (E/L)Gl80 (74.3%) stood out with the highest water uptake capacities.
Again, these results correlate well with the phase transition analysis within the gelation test, revealing that, in addition to hexagonal LCCs, lamellar mesophases are also present in (E/L)Go80 and (E/L)Gl80. It is known that lamellar LCCs usually absorb more water (Alfutimie et al., ). When looking at all the results together, another interesting finding emerges: in situ formed gels containing glycerol monooleate appeared to absorb a higher amount of water than those containing glycerol monolinoleate. This overall trend is important to note, as it appears to be reflected in the results of the in vitro release testing presented later in the study.
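Gravimetric water uptake values such as those above are typically obtained from the mass gain of the gel. The sketch below assumes uptake is expressed relative to the initial gel mass (an assumed convention; the exact basis is method-dependent) and flags the time point at which uptake levels off:

```python
def water_uptake_percent(m_initial_g, m_swollen_g):
    # Assumed convention: mass gain relative to the initial gel mass.
    return 100.0 * (m_swollen_g - m_initial_g) / m_initial_g

def time_to_plateau(times_h, uptake_pct, tol_pct=1.0):
    # First time point after which uptake changes by less than tol_pct
    # percentage points between consecutive measurements.
    for i in range(1, len(uptake_pct)):
        if abs(uptake_pct[i] - uptake_pct[i - 1]) < tol_pct:
            return times_h[i]
    return times_h[-1]

# Illustrative series loosely shaped like the reported behavior (rapid uptake
# in the first hour, then leveling off); not measured data.
t = [0.0, 1.0, 6.0, 12.0, 24.0, 72.0]
u = [0.0, 60.0, 75.0, 80.0, 82.0, 82.2]
assert time_to_plateau(t, u) == 72.0
```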
DSC analysis was performed to elucidate the intermolecular interactions and the water state within in situ formed gels after reaching equilibrium with water. The evaluation was based on the crystallization (Tc) and melting (Tm) temperatures visible in the crystallization and melting curves, as well as the enthalpies of crystallization (ΔHc) and melting (ΔHm) derived by integrating the areas under the corresponding peaks in the DSC thermograms. Initially, the assessment was carried out for the individual compounds, i.e. ethanol, lecithin, glycerol monooleate, glycerol monolinoleate, and bidistilled water. On the crystallization curves of the individual components , no thermal events were observed for ethanol or lecithin. However, for glycerol monooleate, a minor broad exothermic peak at Tc1 = 15.7 °C (ΔHc1 = 0.35 J/g) and a noticeable exothermic peak at Tc2 = −0.8 °C (ΔHc2 = 133.0 J/g) were detected. For glycerol monolinoleate, a small exothermic triple peak appeared (Tc1 = −16.3 °C, ΔHc1 = 1.2 J/g; Tc2 = −22.9 °C, ΔHc2 = 6.7 J/g; Tc3 = −29.2 °C, ΔHc3 = 5.6 J/g). The observed peaks of both lipids can be attributed to the rearrangement and/or crystallization of glycerol monooleate and glycerol monolinoleate molecules, respectively (Chauhan et al., ). Linoleic acid (C18:2) has one more double bond than oleic acid (C18:1), which contributes to its higher degree of unsaturation and greater mobility. This increased mobility is evidenced by the additional crystallization peaks observed in the DSC thermograms (Nyame Mendendy Boussambe et al., ). Regarding bidistilled water, a sharp exothermic peak appeared at Tc1 = −20.1 °C (ΔHc1 = 239.6 J/g), coinciding with the crystallization of supercooled water. Next, on the melting curves of the individual components , again no thermal events were observed for lecithin. Nevertheless, in the case of ethanol, an endothermic peak was detected at Tm1 = 76.2 °C (ΔHm1 = −775.5 J/g), corresponding to its evaporation.
In the case of the lipid components, melting was characterized by small double endothermic peaks, more specifically at Tm1 = 3.3 °C (ΔHm1 = −15.4 J/g) and Tm2 = 13.7 °C (ΔHm2 = −12.6 J/g) for glycerol monooleate, and at Tm1 = −15.6 °C (ΔHm1 = −13.5 J/g) and Tm2 = 4.9 °C (ΔHm2 = −6.9 J/g) for glycerol monolinoleate. The thermal events of bidistilled water were observed at Tm1 = −0.3 °C (ΔHm1 = −278.4 J/g), attributed to ice melting, followed by a broad endothermic peak at Tm2 = 97.9 °C (ΔHm2 = −1709.2 J/g), ascribed to its evaporation. In the next step, the evaluation of in situ formed gels after reaching equilibrium with water was performed, with special attention given to the water state within them. Water located near the polar heads of amphiphilic molecules in LCCs exhibits different thermal properties due to interactions that reduce its degrees of freedom compared to water that is more distant from the polar heads. Consequently, water molecules forming stronger interactions with amphiphilic molecules solidify at lower temperatures than water with weaker interactions, resulting in a lower enthalpy of freezing, sometimes even below the detection limit. Based on this, water is classified as non-freezable water, freezable interlamellar bound water, and freezable bulk water (Ezrahi et al., ). displays the crystallization curves and shows the crystallization enthalpies of in situ formed gels after equilibrium with water was reached. As regards the crystallization curves of (E/L)Go50 and (E/L)Gl50, two wide exothermic peaks with small areas under the curve appeared in the range of Tc = −18.5 °C to Tc = −43.1 °C. It seems plausible that most of the water here was located around the polar head groups of ethanol and lecithin, with almost no free water in the in situ formed gel.
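Enthalpies such as the ΔHc and ΔHm values above are obtained by integrating the baseline-corrected peak area of the heat-flow signal (instrument software normally does this). A minimal trapezoidal sketch; the straight-line baseline and the synthetic peak are assumptions for illustration:

```python
def peak_enthalpy(temps_C, heatflow_W_per_g, scan_rate_K_per_s):
    # Integrate (heat flow / scan rate) dT over the peak after subtracting a
    # straight baseline drawn between the first and last points.
    t0, t1 = temps_C[0], temps_C[-1]
    h0, h1 = heatflow_W_per_g[0], heatflow_W_per_g[-1]
    def baseline(t):
        return h0 + (h1 - h0) * (t - t0) / (t1 - t0)
    area = 0.0  # W*K/g
    for i in range(1, len(temps_C)):
        ya = heatflow_W_per_g[i - 1] - baseline(temps_C[i - 1])
        yb = heatflow_W_per_g[i] - baseline(temps_C[i])
        area += 0.5 * (ya + yb) * (temps_C[i] - temps_C[i - 1])
    return area / scan_rate_K_per_s  # J/g

# Illustrative triangular exothermic peak centred near -20 C on a flat
# baseline, scanned at 1 K/s (synthetic numbers, not the measured thermograms).
dh = peak_enthalpy([-25.0, -20.0, -15.0], [0.0, 1.0, 0.0], 1.0)
assert abs(dh - 5.0) < 1e-9
```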
Further, in the case of (E/L)Go60, (E/L)Gl60, (E/L)Go70, and (E/L)Gl70, we detected two exothermic peaks in the range of Tc = −16.2 °C to Tc = −44.7 °C, representing the crystallization of free water and of bound water from the second hydration layer. In these in situ formed gels, water was present around the polar head groups of ethanol and lecithin in addition to free water within the water channels, indicating that the polar head groups were already saturated with water molecules. It should be emphasized that certain differences were observed between (E/L)Go70 and the other listed in situ formed gels. Namely, the area under the first exothermic peak at approximately −20 °C, attributed to free water within the system, was 3- to 5-times larger in the case of (E/L)Go70, which corresponds well with the results of the water uptake evaluation and is also reflected in the in vitro release testing presented later in the study. In regard to the crystallization curves of (E/L)Go80 and (E/L)Gl80, only one exothermic peak appeared, at −22.6 °C and −22.8 °C, respectively, which, in terms of the size of the area under the curve and its shape, most closely resembles the reference peak of bidistilled water. It can be postulated that the polar head groups of ethanol and lecithin are already fully saturated with water molecules of the first and second hydration layers and that a significant amount of the absorbed water is in the form of free water within the water channels.
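The reference crystallization enthalpy measured for bidistilled water (239.6 J/g, reported above) allows a rough estimate of how much of the absorbed water crystallizes as free water. The sketch assumes the free-water peak enthalpy scales linearly with the pure-water reference, a common approximation that ignores bound and non-freezable water subtleties; the example inputs are hypothetical:

```python
DH_WATER_J_PER_G = 239.6  # crystallization enthalpy of bidistilled water (from the text)

def free_water_per_gram_sample(dh_free_peak_J_per_g):
    # Grams of freezable 'free' water per gram of sample, from the enthalpy of
    # the ~-20 C exothermic peak normalized to the pure-water reference.
    return dh_free_peak_J_per_g / DH_WATER_J_PER_G

def free_water_share(dh_free_peak_J_per_g, water_mass_fraction):
    # Share of the total absorbed water that crystallizes as free water.
    return free_water_per_gram_sample(dh_free_peak_J_per_g) / water_mass_fraction

# Hypothetical sample: 40% water by mass, free-water peak of 47.92 J/g
# -> about half of the absorbed water behaves as free water.
share = free_water_share(47.92, 0.40)
assert abs(share - 0.5) < 1e-6
```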
shows the melting curves, and presents the melting enthalpies of in situ formed gels after equilibrium with water was reached. Melting of the ice formed within the cooling cycle of the analysis was noted at approximately 0 °C. In addition, evaporation of ethanol was detected at approximately 78 °C, while water evaporated at approximately 100 °C. The obtained melting curves confirm the trends observed in the crystallization curves: the peak positions and areas under the curves were directly proportional to the content of absorbed water within the in situ formed gels.
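The classification of water states described above can be illustrated with a back-of-envelope calculation: water bound to the polar headgroups does not crystallize, so only the freezable fraction contributes to the ice-melting peak. The sketch below is illustrative and not part of the study's analysis; the reference enthalpy is the value measured here for bidistilled water (−278.4 J/g), while the gel inputs are hypothetical.

```python
# Rough estimate of the freezable water fraction in an equilibrated gel from
# DSC melting enthalpies. Non-freezable (bound) water does not contribute to
# the ice-melting peak, so the peak area scales with the freezable fraction.

def freezable_water_fraction(dH_gel, water_mass_fraction, dH_water=-278.4):
    """Fraction of the absorbed water that is freezable (0..1).

    dH_gel              -- ice-melting enthalpy of the gel, J per g of gel
    water_mass_fraction -- mass fraction of water in the equilibrated gel
    dH_water            -- ice-melting enthalpy of pure water, J/g
                           (here: the value measured for bidistilled water)
    """
    return (dH_gel / water_mass_fraction) / dH_water

# Hypothetical gel: 40% (m/m) water, measured ice-melting enthalpy -55.7 J/g gel
frac = freezable_water_fraction(-55.7, 0.40)
print(f"freezable: {frac:.2f}, non-freezable (bound): {1 - frac:.2f}")
```

On these hypothetical numbers, about half of the absorbed water would be freezable and half bound, consistent with the picture of headgroup-associated water drawn above.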
Microstructure evaluation of the precursor formulations as well as of the in situ formed gels after reaching equilibrium with water was further complemented by rheological tests, which provided additional insights into their flow behavior under applied stress as well as their viscoelastic characteristics. These measurements contributed to a comprehensive understanding of the structural changes accompanying the sol–gel transition, in addition to the structural analyses performed using PLM and DSC. Firstly, rotational measurements were performed to elucidate the flow behavior of a system subjected to applied stress, offering an insight into microstructural alterations upon SC administration. The viscosity curves of all precursor formulations obtained at 25 °C demonstrated a constant viscosity regardless of increasing shear rate. This finding confirms that all precursor formulations exhibited Newtonian fluid behavior, a desirable feature for injectables designed for SC administration. When comparing the viscosities of the precursor formulations at the lowest shear rate, a positive correlation between lipid content and viscosity was revealed. However, it is important to note that the viscosities of all precursor formulations ranged from 17.0 cP to 36.9 cP, far below 50 cP, therefore confirming their suitability for SC injection (Miller et al., ). Precursor formulations with glycerol monolinoleate as the lipid phase generally exhibited lower viscosities. Further, rotational tests were also performed for the in situ formed gels after reaching equilibrium with water at a temperature of 37 °C. Their viscosities decreased with increasing shear rate until they reached a constant value at high shear rates; hence, all in situ formed gels can be classified as non-Newtonian pseudoplastic systems, which is also consistent with our expectations. The viscosity values of the in situ formed gels were prominently higher than those of the precursor formulations.
In addition, whereas negligible variations in viscosity values at the lowest shear rate (1 s −1 ) were detected for the precursor formulations, notable variations were observed for the in situ formed gels, which can be explained by their spontaneous formation. Therefore, the viscosities of the in situ formed gels are presented at 2 s −1 , showing a trend similar to that observed for their respective precursor formulations. More specifically, the viscosity values ranged from 111.9 Pa·s to 698.0 Pa·s. In situ formed gels containing glycerol monolinoleate as the lipid phase in general exhibited lower viscosities. All of these results correlate well with the gelation test and the corresponding PLM analysis. Further, oscillatory shear frequency sweep measurements were performed for the in situ formed gels after reaching equilibrium with water at 37 °C, as they provide important information regarding the viscoelastic properties of a system corresponding to its network structure. Therefore, rheological parameters, including the storage (G′) and loss (G″) moduli as well as the complex viscosity (η*), were recorded across various angular frequencies. The G′ modulus reflects the elastic properties of a system, with high values demonstrating a system with strong elasticity and structure, while high G″ values suggest a predominantly viscous, liquid-like material. Depending on the dominant modulus, a system can be classified as either elastic or viscous. In the case of all in situ formed gels, the G′ modulus was generally higher than the G″ modulus with increasing frequency, whereas the complex viscosity decreased with increasing frequency, indicating predominantly elastic behavior. This is characteristic of gel-like systems and can be attributed to the well-organized microstructure of the LCCs. More specifically, in the case of (E/L)Go50, (E/L)Gl50, (E/L)Go60, (E/L)Gl60, and (E/L)Gl70, the G′ and G″ moduli were strongly enhanced with increasing frequency.
This observed rheological pattern is representative of hexagonal LCCs (Xingqi et al., ) and is in good agreement with the PLM photomicrographs. Furthermore, similar curves were also observed for (E/L)Go70, whereby the G′ and G″ moduli were less enhanced with increasing frequency, indicating a less dense network of hexagonal LCCs. To note, weaker interactions between surfactant molecules and water were also confirmed by DSC measurements for this in situ formed gel. Next, the rheological behavior of (E/L)Go80 and (E/L)Gl80 also correlated well with the results of the other assessments. Namely, in the case of these two in situ formed gels, both the G′ and G″ moduli were nearly independent of the angular frequency over the entire range investigated, with a large gap between the two curves, suggesting the coexistence of hexagonal mesophases along with lamellar LCCs (Mistry et al., ), as we had also anticipated based on PLM analysis.
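The two rheological classifications used above, Newtonian precursor versus pseudoplastic gel from rotational tests, and elastic gel-like behavior (G′ > G″) from frequency sweeps, can be sketched numerically. The snippet below is an illustrative sketch, not the study's data analysis: it fits the Ostwald–de Waele power law η = K·γ̇^(n−1) to hypothetical viscosity–shear-rate points (n ≈ 1 indicates Newtonian flow, n < 1 shear-thinning) and checks the loss factor tan δ = G″/G′.

```python
import math

def power_law_index(shear_rates, viscosities):
    """Fit eta = K * gamma_dot**(n-1) by least squares in log-log space and
    return the flow behavior index n (n ~ 1: Newtonian; n < 1: shear-thinning,
    i.e. pseudoplastic). The slope of log(eta) vs log(gamma_dot) equals n - 1."""
    xs = [math.log(g) for g in shear_rates]
    ys = [math.log(v) for v in viscosities]
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    return slope + 1.0

# Hypothetical precursor: constant ~0.030 Pa*s viscosity (Newtonian)
precursor = [(1, 0.030), (10, 0.030), (100, 0.030)]
# Hypothetical gel: viscosity falls steeply with shear rate (pseudoplastic)
gel = [(1, 500.0), (10, 63.0), (100, 7.9)]

n_pre = power_law_index(*zip(*precursor))
n_gel = power_law_index(*zip(*gel))
print(f"precursor n = {n_pre:.2f} (Newtonian), gel n = {n_gel:.2f} (shear-thinning)")

# Frequency-sweep check with hypothetical moduli: G' > G'' (tan delta < 1)
# indicates a predominantly elastic, gel-like system.
G_prime, G_double_prime = 1200.0, 300.0   # Pa, hypothetical
tan_delta = G_double_prime / G_prime
print(f"tan delta = {tan_delta:.2f} -> {'elastic gel' if tan_delta < 1 else 'viscous liquid'}")
```

The same log-log fit applied to real flow curves would quantify how far each in situ formed gel departs from Newtonian behavior, complementing the qualitative classification given in the text.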
Selection of the release medium
The newly developed in situ forming liquid crystalline systems were designed for the sustained release of the peptide drug Tα1 with inherent poor stability. Therefore, to ensure an optimal in vitro release testing experiment over the period of 2 weeks, preliminary studies were conducted to assess the influence of the release medium on Tα1’s stability. In addition, given that the experiment was carried out at 37 °C and that the samples were stored for subsequent UHPLC analysis after sampling, the effect of storage temperature on Tα1’s stability was also evaluated. Within the assessment of release media, we examined Tα1’s stability with regard to the absence (ultrapure water) or presence of ions in various buffers (PBS, simulated body fluid), the pH value (6.8 and 7.4), and the proportion of ethanol (5%, 100% (m/m)). Additionally, as part of the temperature stability testing, Tα1’s stability was evaluated at the following temperatures for all release media: −20 °C (freezer temperature), 8 °C (refrigerator temperature), 25 °C (room temperature), and 37 °C (body temperature). At −20 °C and 8 °C, Tα1’s stability was adequate within all tested combinations of release media. However, evident differences in stability were observed at elevated temperatures. To note, we found that adding a small proportion of ethanol improved the stability of the peptide drug Tα1 in the release medium, proving its key influence on Tα1’s stability. Considering this finding and the literature data reporting that a slightly acidic pH improves Tα1’s stability (Dai et al., ), PBS (pH = 6.8) containing 5% (m/m) of ethanol was selected as the most appropriate release medium at all tested temperatures over the entire testing period.
The potential effect of ethanol on in situ depot formation was investigated by PLM microstructural examination of the in situ formed gels exposed to the release medium containing 5% (m/m) ethanol after equilibrium with water was reached (data not shown). It was demonstrated that this proportion of ethanol had no effect on depot formation.
In vitro release of the Tα1 from in situ formed gels
Achieving the sustained release of the peptide drug Tα1 was one of the pivotal aspects we focused on in the development of the in situ forming liquid crystalline systems in this study. Thus, in vitro release testing was performed to evaluate their potential for minimizing Tα1’s dosing frequency, which could greatly improve patient compliance upon clinical translation of the systems. displays the cumulative in vitro release of the peptide drug Tα1 from the in situ formed gels over a period of 2 weeks. All the studied in situ formed gels demonstrated sustained release profiles; however, noticeable differences were observed among them. (E/L)Go80 and (E/L)Gl80 exhibited the greatest total drug release after 2 weeks, with 84.2% and 93.4%, respectively. Further, (E/L)Go70 demonstrated 19.1% of released Tα1 after 2 weeks. It is important to note that this represented 2- to 4-times greater total drug release when compared to the other in situ formed gels. Namely, the latter exhibited comparable amounts of released drug after 2 weeks: 8.4% for (E/L)Gl70, 8.3% for (E/L)Go50, 7.8% for (E/L)Gl50, 5.8% for (E/L)Gl60, and 5.5% for (E/L)Go60. The observed differences can be explained by the bidirectional relationship among the variables influencing the drug release mechanism from LCCs. Specifically, these include the hydrophilic character of the peptide drug Tα1 (Goldstein et al., ), which determines its affinity for the water channels of LCCs, as well as the composition and the microstructure of the LCCs with the interrelated water uptake capacity.
It is known from the literature that the release of hydrophilic drugs from lamellar LCCs, which are in general more highly hydrated mesophases, is more rapid than from hexagonal LCCs with relatively low water absorption. This phenomenon can be attributed to an increase in the water channels available for the release of hydrophilic drugs with increasing water content within the system (Borgheti-Cardoso et al., ; Elnaggar et al., ). In the present study, the coexistence of hexagonal mesophases along with lamellar LCCs was confirmed by PLM analysis and oscillatory measurements for (E/L)Go80 and (E/L)Gl80. Consequently, the water uptake capacity of (E/L)Go80 and (E/L)Gl80 was exceptionally high, and their release was greater than that of the other in situ formed gels, consistent with the explanation provided above. Namely, the other in situ formed gels formed only hexagonal mesophases, resulting in their noticeably sustained release profiles. Among these, (E/L)Go70 demonstrated a moderately greater total drug release, which corresponded with its higher water uptake capacity and the associated larger proportion of free water, as also confirmed by the DSC measurements. In other words, the larger amount of free water within the water channels of the hexagonal mesophases present in (E/L)Go70 enabled a moderately greater release of the hydrophilic peptide drug Tα1. However, it is still necessary to take into account that (E/L)Go70 formed only hexagonal mesophases and that the water channels within them are closed to the external environment; hence, water diffusion is retarded (Chavda et al., ). Since the other in situ formed gels exhibiting solely hexagonal mesophases showed similar water uptake capacities and a similar intermolecular network, as identified by DSC analysis, the amounts of released peptide drug Tα1 were comparable. Further, the secondary structure of the peptide drug Tα1 was examined using CD spectroscopy.
Considering the literature indicating that Tα1 is an intrinsically disordered peptide at neutral pH and body temperature in water, with various solvents capable of inducing structural changes (Hoch & Volk, ), its structural stability was systematically evaluated in different samples throughout processing. Supplementary Figure S3A shows the dichroic profile of the peptide drug Tα1 in ethanol, as used for incorporation into the formulations, indicating a β-sheet conformation (Greenfield, ). Further, the CD spectrum of the peptide drug Tα1 in the release medium after drug release testing, shown in Supplementary Figure S3B, indicates that the peptide adopted a random coil conformation in the aqueous environment, which aligns well with previous findings (Grottesi et al., ). In addition, it also correlates with the CD spectra obtained for the dissolved lyophilisate of the peptide drug Tα1 in the release medium and in PBS itself (Supplementary Figure S3C and S3D). Taken together, these results confirm that the peptide drug Tα1 adopts and maintains its native conformation, characteristic of an aqueous environment, in the release medium after the completion of the in vitro release testing. Notably, the conformational changes in different environments may serve as structural prerequisites for Tα1’s interaction with lymphocyte membranes, potentially representing the initial event in lymphocyte activation during immune response modulation, thereby highlighting its functional relevance (Grottesi et al., ). To conclude, the results of the in vitro release testing demonstrated that adjusting the composition of the precursor formulations facilitates the regulation of the in situ formed gels’ microstructure, thereby controlling the release profiles of the incorporated peptide drug Tα1. Furthermore, the release profiles obtained over a period of 2 weeks imply the potential of the in situ formed gels developed in this study to prolong the peptide drug Tα1’s release and notably minimize its dosing frequency.
Nevertheless, it is important to note that the percentage of released peptide drug Tα1 increased only slightly after the initial release observed in the first days of the in vitro release testing. A similar release behavior has also been reported for the peptide drug leuprolide acetate from liquid crystalline hexagonal mesophases (Báez-Santos et al., ). Upon administration, however, the SC tissue pressure, along with the flow of the SC interstitial fluid perfusing the in situ formed depots, is expected to assist the erosion of the in situ formed gel matrix and enhance the drug release (Torres-Terán et al., ).
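Sustained release profiles such as those described above are often characterized with the semi-empirical Korsmeyer–Peppas model, Mt/M∞ = k·t^n, where the exponent n hints at the release mechanism (n ≈ 0.5 for Fickian diffusion). The sketch below is illustrative only; this model was not fitted in the study, and the cumulative-release points used here are hypothetical.

```python
import math

def korsmeyer_peppas_fit(times_h, fractions_released):
    """Fit Mt/Minf = k * t**n by least squares in log-log space; returns (k, n).
    By convention, only points with Mt/Minf <= 0.6 are used, reflecting the
    model's usual validity range."""
    xs = [math.log(t) for t in times_h]
    ys = [math.log(f) for f in fractions_released]
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    n = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    k = math.exp(mean_y - n * mean_x)
    return k, n

# Hypothetical cumulative-release points: (time in hours, fraction released)
data = [(24, 0.05), (72, 0.09), (168, 0.13), (336, 0.19)]
k, n = korsmeyer_peppas_fit(*zip(*data))
print(f"k = {k:.3f} h^-n, release exponent n = {n:.2f}")
```

For these hypothetical data the fitted exponent comes out near 0.5, i.e. a diffusion-controlled profile, which is the kind of behavior one would expect for a hydrophilic peptide diffusing through the water channels of hexagonal mesophases.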
This study has shown that lyotropic liquid crystals represent a flexible and versatile platform that enables the regulation of macro- and microstructure, and thereby the release profile and overall performance, through minimal adjustments in the component ratios. We report here the development of an in situ forming system for SC administration enabling the potential sustained release of the peptide drug Tα1. Through a systematic design, we accomplished all the main objectives set at the beginning of the study. Firstly, the nonaqueous precursor formulations demonstrated optimal rheological properties for SC injection. Further, an easy and quick in situ phase transition of the precursor formulations to hexagonal LCCs was obtained. The change was triggered by water absorption, which represents the least invasive stimulus for phase transition occurrence. Finally, the obtained release kinetics of the peptide drug Tα1 from the in situ formed gels imply a prolonged release behavior that could notably minimize its dosing frequency. These results highlight the great potential of the newly developed in situ forming liquid crystalline systems as injectable long-acting depots for SC administration of the peptide drug Tα1, promoting patient adherence.
Pediatric inflammatory bowel disease and the effect of COVID-19 pandemic on treatment adherence and patients’ behavior
The novel coronavirus disease (COVID-19) is rapidly spreading, striking millions of people worldwide. The disease, caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), was declared a global pandemic by the World Health Organization (WHO) in March 2020, following an outbreak that began in China in December 2019. In order to control the virus transmission, the Israeli authorities implemented wide-scale social distancing measures including school shut-downs, traffic and travel restrictions, discontinuation of nonessential work and commerce, and a complete national curfew during national holidays. These restrictions were later gradually eased, including the reopening of kindergartens and schools on May 17th. Medical care provision changed during the COVID-19 pandemic, moving toward more telemedicine-based practice as guided by the Centers for Disease Control and Prevention (CDC). These changes affected all patients with chronic diseases, including patients with inflammatory bowel disease (IBD), who experienced changes in standard management, with less frequent outpatient visits. Patients with IBD were not shown to be more susceptible to severe COVID-19 unless treated with high-dose corticosteroids, and the pediatric IBD Porto Group of the European Society of Paediatric Gastroenterology, Hepatology, and Nutrition (ESPGHAN) recommended continuing medical treatments including biologic agents in pediatric patients with IBD.
Accordingly, treating physicians of pediatric patients with IBD were instructed by the Israeli Society of Paediatric Gastroenterology, Hepatology, and Nutrition (ISPGHAN) to adopt a non-interruption strategy for IBD medical treatment and to recommend attendance of kindergartens and schools once approved for the general population by the Israeli Ministry of Health (MOH). We aimed to assess the effect of the COVID-19 pandemic on changes in health care provision, fear of infection, continuation of medical therapies, and adherence to MOH instructions in pediatric patients with IBD.
A cross-sectional study based on a structured telephone interview was conducted among all pediatric IBD patients treated in the Institute of Gastroenterology, Nutrition and Liver Diseases in Schneider Children’s Medical Center of Israel. Inclusion criteria were: patients between the ages of 0–18 years with an established diagnosis of IBD. Enrollment occurred between May 31st and July 9th, 2020. Data collection included demographic data, diagnosis, and current treatment. The survey questionnaire was designed especially for this study and included 13 questions on the behavior and treatment adherence of the patients: eight questions with a 5-point Likert-scale score, four closed-ended questions, and one open-ended question regarding the impact of the COVID-19 pandemic. The survey was answered exclusively by one of the patients’ parents for children younger than 10 years, while children older than 10 years could answer the survey together with their parents. Continuous variables are presented as mean ± standard deviation for normally distributed variables and as median with interquartile range (IQR) for non-normally distributed variables. Categorical variables are presented as frequency and percentage. Categorical variables were compared using the chi-square test. p-values < 0.05 were considered statistically significant.
Ethical considerations
The study protocol was approved by the local Institutional Review Board. Parents of all subjects gave informed consent to participate in the survey.
Out of a total of 253 pediatric patients with IBD eligible according to the inclusion criteria, 9 patients refused to take part in the survey, leaving a total of 244 patients who participated in the study. The cohort characteristics are depicted in Table . The study population included 117 (48%) females, with a median age of 15.3 (IQR 12.6–17.1) years. Crohn’s disease (CD) was diagnosed in 170 (69.7%) patients, ulcerative colitis (UC) in 67 (27.5%) patients, and 7 (2.8%) patients had IBD-unclassified. Median disease duration was 3.5 (IQR = 1.5–5.5) years. The survey questions and results are presented in Table . Most patients (169, 69.3%) did not feel any difference in health services provision during the COVID-19 pandemic, while 43 (17.6%) noted a deterioration and 32 (13.1%) noted an improvement. The majority of patients (181, 74.2%) reported that there was no change in their gastroenterologist’s availability, whereas 40 (16.4%) patients reported an improvement in their treating gastroenterologist’s availability during the COVID-19 pandemic. Fear of severe COVID-19 infection due to IBD or IBD medications was reported by 198 (81.1%) patients, with 110 (45.1%) being very concerned about contracting a severe COVID-19 infection. A total of 228 (93.4%) patients reported that they strictly obeyed the MOH guidance. Additional protective measures were taken by 120 (49.2%) patients. The most common measure was complete avoidance of school or kindergarten despite approval of attendance by the MOH, taken by 91 (37.3%) patients. Voluntary lockdown (including avoidance of school attendance, social contacts, and staying at home) was implemented by 19 (7.8%) patients, and in 4 cases the lockdown included all family members. Other protective measures included: avoidance of school attendance by siblings (7, 2.9%), avoidance of social contacts (4, 1.6%), frequent disinfection measures (3, 1.2%), repeated COVID-19 PCR tests (2, 0.8%), and parental retirement from work (1, 0.4%) (Fig. ).
The majority of patients (134, 54.9%) reported that they were not adequately informed about the potential effect of COVID-19 on patients with IBD. A subgroup of patients (116, 47.5%) reported that they were not comfortable attending regular clinics during the COVID-19 outbreak, while 84 (34.4%) patients missed their clinic visit. A higher proportion of patients (178, 73%) declared a concern about attending the emergency room (ER) in case of IBD exacerbation. Avoidance of pharmacy visits was reported by 28 (11.5%) patients. Discontinuation or change of treatment was considered by 22 (9%) patients, whereas only 7 (2.9%) changed or discontinued their IBD medications due to COVID-19. Among patients who discontinued treatment, 2 (0.8%) patients stopped anti-TNF therapy, 2 (0.8%) patients discontinued 5-ASA treatment, 2 (0.8%) patients stopped antibiotic therapy, and 1 (0.4%) patient discontinued exclusive enteral nutrition treatment. No statistically significant difference was found comparing patients with UC to patients with CD (data not shown), except for a higher proportion of patients with CD who reported being informed about the COVID-19 effect on patients with IBD (70.2% vs. 34.4%, p = 0.03). No statistically significant difference was found comparing patients treated with biologic agents and patients treated with immunomodulators (with or without steroids). Comparing patients in primary school and younger ( n = 110) to those in high school ( n = 134), younger patients tended to take more additional protective measures: 63 (57.3%) patients versus 57 (42.5%), respectively ( p = 0.02). No other statistically significant differences were found between these groups (data not shown). No statistically significant difference was found comparing young adults at the age of 18 ( n = 26), who answered the survey by themselves, to younger patients who filled in the survey with a parent ( n = 218), except for differences in obeying MOH restrictions and fear of attending the ER.
Young adults stated less frequently that they strictly obeyed the MOH restrictions and guidance: 15 (57.7%) vs. 158 (72.5%), whereas a higher proportion of young adults stated that they obeyed the MOH restrictions and guidance only to a medium extent (4 (15.7%) vs. 9 (4.1%), p = 0.01). Young adults stated more frequently that they were not afraid at all to attend the ER during the COVID-19 pandemic (5 (19.2%) vs. 5 (2.3%), p < 0.01).
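The chi-square comparisons reported above can be reproduced from the published counts. As an illustration, the sketch below rebuilds the 2×2 table for additional protective measures in younger (63/110) versus older (57/134) patients using only the standard library; a Pearson chi-square without continuity correction is assumed, since it recovers the reported p = 0.02.

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for the 2x2 table
    [[a, b], [c, d]]; returns (chi2, p) with 1 degree of freedom."""
    n = a + b + c + d
    observed = [a, b, c, d]
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    # For 1 df: P(chi2_1 > x) = erfc(sqrt(x/2))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Additional protective measures: 63 of 110 younger vs. 57 of 134 older patients
chi2, p = chi2_2x2(63, 110 - 63, 57, 134 - 57)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p ~ 0.02, matching the reported value
```

The same helper applied to the other reported contrasts (e.g. CD vs. UC on being informed about COVID-19) would allow a reader to verify each quoted p-value directly from the frequency tables.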
In this study, we present the effect of the COVID-19 pandemic on more than 240 pediatric patients with IBD. Some degree of fear of severe SARS-CoV-2 infection was reported by the majority of patients, corresponding to a high proportion of patients feeling uninformed about the possible effect of COVID-19 on IBD. In contrast, only 27% of adult patients with IBD in Germany reported that they were afraid that their medications would worsen COVID-19 infection. In a global survey among adult patients with IBD, 30% believed that an IBD diagnosis predisposes to an increased risk of COVID-19 infection, while 64% of patients stated that immunosuppressive medications were associated with a higher risk of infection. The higher proportion of pediatric patients concerned about severe COVID-19 infection presented in our study may reflect parental anxiety or merely a lack of information. The COVID-19 pandemic had a major impact on the management of patients with chronic diseases, shifting care toward telemedicine. As our institute adopted this strategy, partially filling the gap in patients’ care, it was reasonable to find that most patients did not feel any change in health care provision. Despite the inability to perform a physical examination, telemedicine should be considered a feasible option for patients with IBD, as it can provide acceptable medical care and even has the potential to improve compliance and diminish loss of follow-up. The COVID-19 pandemic may serve as a window of opportunity for integrating telemedicine into pediatric IBD practice. Concerns about visiting outpatient clinics and the ER during the outbreak were reported by most of the patients in our cohort. These findings are concerning, particularly in view of recent reports describing delayed medical treatment in common medical conditions during the COVID-19 pandemic.
Efforts should be invested in conveying the message that the risk of delaying essential medical treatment outweighs the small risk of contracting the virus in a controlled environment. Reassurance and education measures should be employed to counteract fear of attending medical services such as the ER, specifically in parents of young patients. Additional protective measures were taken by almost 50% of patients in our cohort, mostly staying at home despite the re-opening of schools and kindergartens. Unnecessary isolation of children who are not at increased risk of severe disease might have a detrimental impact on a child’s healthcare and wellbeing, emphasizing the need for patient guidance and reassurance by all treating disciplines. These additional measures were more common among younger patients, suggesting a significant psychological effect of COVID-19 on parents of young children. On the other hand, young adults with IBD should be encouraged to obey MOH restrictions and guidance, as a higher proportion of those patients stated that they did not fully comply with MOH regulations. Only a minority of patients reported that they considered changing or discontinuing IBD medications, and an even lower percentage actually did so. These findings are similar to recent reports in adults with IBD, in whom medication discontinuation of about 4% was observed. Despite the high treatment adherence, these findings should not be ignored, as cessation of treatment may have significant implications for disease control and potential complications. Our study has several limitations, including the lack of a control group, which did not allow a comparison with other pediatric populations, and the interviewing of only one parent, preventing a full perspective for each case. 
In conclusion, we found several distinct features of the effect of the COVID-19 pandemic on pediatric patients with IBD, including a high rate of fear of severe COVID-19 infection, fear of attending necessary medical facilities, and a high rate of avoidance of social activities. Our findings emphasize the importance of providing patients with the most updated information, not only regarding their chronic condition but also regarding the effect of global medical issues on their disease and treatment, particularly during a global pandemic. Establishing open patient-physician communication may motivate patients to raise questions regarding their concerns, enabling physicians to address these issues prior to a decline in medical adherence. We believe that patients’ feelings and behaviors can be improved by reassurance and proactive patient education.
|
Effect of feedback-integrated reflection, on deep learning of undergraduate medical students in a clinical setting | eb2aa819-555b-4ce4-8d36-82354b809a78 | 11731358 | Gynaecology[mh] | Learning is a multifaceted process that extends beyond the mere acquisition of knowledge, to include the ability to critically evaluate, reflect, and apply that knowledge effectively. In this context, reflection and feedback emerge as two crucial metacognitive strategies to enhance the learning process . Both have substantial evidence supporting their role in promoting deep or meaningful learning. Meaningful learning, as described by David Ausubel, takes place when learners connect new information to their existing cognitive structures. Unlike rote memorization, it emphasizes understanding concepts, making connections, and applying knowledge in diverse contexts . This is especially vital in medical education, as it fosters the development of clinical reasoning and problem-solving abilities, which are essential for professional competence and delivering effective patient care . Reflection fosters self-regulated learning by enabling learners to critically evaluate their performance, identify gaps, and make plans to improve . Feedback, in turn, provides external insights that complement reflection, helping learners recognize their strengths and weaknesses, adjust their learning strategies, and enhance clinical reasoning and decision-making skills . Together, these tools form a powerful combination for fostering meaningful learning, as they help students connect theoretical knowledge with practical application, enabling them to deepen their understanding . In medical education, feedback is traditionally provided post-assessment as a part of formative assessment, focusing on performance evaluation. However, integrating feedback into the learning process synchronously, combined with reflective practices, can significantly enhance self-regulated learning . 
Most educational literature treats feedback and reflection as separate metacognitive processes, and only a few studies have examined their combined benefits. For instance, the U.K. Foundation Programme integrates reflective practice with structured feedback, demonstrating improvements in clinical reasoning and the development of lifelong learning habits among trainees . Similarly, reflective portfolios in the U.S., when paired with personalized feedback, have been shown to enhance self-regulated learning and critical assessment skills in medical students . Statement of the problem Despite the recognized importance of reflection and feedback in medical education globally, their combined implementation remains limited. Most studies on these strategies rely on qualitative approaches, lacking robust quantitative evidence to evaluate their effectiveness. Moreover, the use of feedback combined with reflection as a metacognitive learning strategy in undergraduate medical education, particularly in clinical settings, is underexplored . This gap in evidence necessitates an investigation to determine the combined effectiveness of reflection and feedback in enhancing deep or meaningful learning. Such an evidence-based approach can provide critical insights for improving clinical education in resource-limited settings. Conceptual framework This study is grounded in Self-Regulated Learning Theory, which highlights learners’ active monitoring, evaluation, and regulation of their learning for improved outcomes . Reflection enables critical analysis of experiences, while feedback addresses knowledge gaps and guides future learning strategies. Together, these iterative mechanisms promote structured engagement and enhanced learning, as illustrated in Fig. . Rooted in constructivism and pragmatism, the framework views reflection as a tool for deeper learning and feedback as a practical means to refine learning approaches. 
Research question Does the integration of feedback with reflection significantly enhance meaningful learning, as measured by higher-order MCQ scores, among undergraduate medical students compared to reflection alone? Objective of the study To evaluate the impact of feedback-integrated reflection versus reflection alone on higher-order MCQ scores among undergraduate medical students in a gynecology clinical setting. We hypothesize that the integration of feedback with reflection significantly improves higher-order MCQ scores among undergraduate medical students compared to reflection alone, fostering deeper learning and better clinical reasoning. The findings of this study are particularly relevant for curriculum designers and educators in medical and health professions education. These findings will contribute to the growing body of evidence supporting the integration of feedback into reflective practices to enhance learning outcomes in medical education. 
Study design This study employed an experimental study design to determine the impact of combining feedback with reflection versus reflection alone on higher-order MCQ scores, representing learning outcomes, among undergraduate medical students in a clinical gynecology setting. Ethical approval for the study was obtained from the Institutional Review Committees of Riphah University and Rawalpindi Medical University (App #Riphah/IRC/23/3034; Ref #341/IREF/RMU/2023). 
Study setting The study was conducted at Rawalpindi Medical University during an 8-week clinical rotation in gynecology and obstetrics, which included ward, outpatient, and operation theater activities. The structured intervention occurred over six consecutive days as part of this rotation. Sample/participants Participants included fifth-year undergraduate medical students from Rawalpindi Medical University. Students were recruited after providing informed consent. Participation in the study was voluntary, and it was explicitly stated that the study results would not influence students’ academic grades. A total sample size of 68 students (34 per group) was determined using the G*Power sample size calculator (effect size = 0.7, α = 0.05, power = 0.80), based on similar previous studies. A simple randomization method was used to assign students into two groups: Study Group (feedback + reflection): 34 participants. Control Group (reflection only): 34 participants. This process was done manually by assigning each participant a number written on identical slips of paper, which were folded and placed into an opaque container. Slips were drawn one at a time, alternately assigning participants to the study group and the control group. The process was overseen by an independent individual to minimize selection bias. Procedure All participants attended a refresher training session on reflective writing, which revisited Gibbs Reflective Cycle. After randomized assignment of students into two groups, baseline equivalence of knowledge between the two groups was confirmed through a pre-test. Both groups participated in identical teaching sessions over six days, facilitated by the same instructor. 
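The G*Power calculation reported above (independent-samples t-test, effect size d = 0.7, α = 0.05, power = 0.80) can be approximated with a short, stdlib-only Python sketch; this is an illustrative normal-approximation formula, not G*Power's exact algorithm, and the `+ 1` is a rough correction for using the t rather than the z distribution:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group sample size for a two-sided,
    two-sample t-test with standardized effect size d (Cohen's d)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ≈ 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # ≈ 0.84 for power = 0.80
    n = 2 * ((z_alpha + z_beta) / d) ** 2
    return ceil(n) + 1  # rough small-sample correction for the t distribution

print(n_per_group(0.7))  # → 34, matching the reported 34 students per group
```

For a medium effect size of d = 0.5 the same sketch gives 64 per group, in line with standard power tables.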
These sessions included ward-based learning and interactive small group discussions, focusing on six clinical cases—one case per day (antepartum hemorrhage, postpartum hemorrhage, pregnancy-induced hypertension, gestational diabetes mellitus, anemia, and intrauterine growth restriction). Each case was approached holistically, encompassing history taking, physical examination, diagnostic investigations, differential diagnosis, and management strategies. At the end of each session, both groups submitted written reflections based on Gibbs Reflective Cycle, guided by pre-designed prompts to facilitate structured reflections. These prompts included: What have you learned from the clinical ward class? What are your thoughts and feelings about the topic? What were you thinking during the topic discussion? Do you have any previous experience of the situation/topic? How did this session help you in making differential diagnoses? Were there any ambiguous or difficult-to-understand concepts? What is your plan to improve learning based on this experience? These prompts encouraged participants to evaluate their learning, identify challenges, and plan improvements. Intervention Verbal feedback was given by the same facilitator to the study group after each reflective activity. The facilitator read the students’ reflections and identified and listed the key points highlighted in them, both positive points for reinforcement and areas of concern. This feedback, based on the identified key points, was delivered by the same facilitator the following day before the start of the next activity. The duration was up to 1 h. Feedback was provided in a small group setting, focusing specifically on these identified points. Utilizing the “Ask-Tell-Ask” model, the facilitator first posed questions to the respective students to gauge their understanding and perspectives on their reflections (“Ask”). 
After listening to their responses, the facilitator offered targeted feedback and insights based on the reflections (“Tell”). For example, a student who struggled with understanding preeclampsia could receive targeted feedback: if the student reflected that he or she was unable to understand the pathophysiology behind fetal complications caused by hypertension, the teacher could respond by linking hypertension to placental blood circulation (which is compromised in hypertensive patients). Finally, the facilitator asked follow-up questions to encourage further exploration and ensure students could articulate how they would apply the feedback to their future reflective practices (“Ask”). This allowed for a more immediate and relevant discussion of the reflections . This structured approach not only personalized the feedback but also made it feasible for facilitators to engage with multiple students effectively within a limited timeframe. Feedback sessions lasted five minutes per student, with approximately one hour allocated per group session, and were tailored to individual needs. The control group participated in the same teaching and reflection activities but did not receive feedback on their reflections. The summary of the methodology used is given in Fig. . Data collection Pre-Test and Post-Test Scores : Quantitative data was collected using scores from the validated MCQs administered to both groups before and after the intervention. A set of 30 validated multiple-choice questions (MCQs), covering the six key topics in gynecology taught over six days, was used for evaluation. Each topic contributed 5 MCQs, aligned with higher-order cognitive levels, i.e., application (C3), analysis (C4), and evaluation (C5). The MCQs were reviewed and validated by two experts in obstetrics and gynecology, one of whom had expertise in medical education. 
The pre-test served as a baseline measure for students’ knowledge before the study, while the post-test assessed learning outcomes following the intervention. Both tests were designed with comparable levels of difficulty to ensure consistency and reliability. Descriptive Feedback : Informal student perceptions of the feedback process were recorded but not formally analyzed in this study. Data analysis Data were analyzed using SPSS version 26. Descriptive statistics, including frequency, percentage, mean, and standard deviation, were calculated. A Paired Sample T-Test was conducted to compare pre-test and post-test scores within each group to measure knowledge improvement. An Independent Sample T-Test was done to compare the post-test scores between the study and control groups to evaluate the effect of feedback. A p-value < 0.05 was considered statistically significant. To evaluate the effectiveness of the intervention, normalized learning gain (NLG) was calculated for both the intervention (study) and control groups. The formulas used for this calculation are:

$$\text{Normalized learning gain (\%)}=\frac{\text{Post-test score}-\text{Pre-test score}}{\text{Maximum score}-\text{Pre-test score}}\times 100$$

$$\text{Net learning gain}=\text{NLG}_{\text{study}}-\text{NLG}_{\text{control}}$$

The net learning gain was then calculated as the difference in normalized learning gains between the study and control groups to quantify the additional impact of the intervention. Control of bias Bias was minimized through randomization, consistent facilitation by a single instructor, and assurance that study grades would not impact final scores. The reflection and feedback process were also standardized, ensuring consistency and validity in the intervention. 
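As an illustration, the normalized and net learning gain calculations can be sketched in a few lines of Python; the per-student scores below are hypothetical examples out of 30 MCQs, not the study's raw data:

```python
from statistics import mean

def normalized_gain(pre: float, post: float, max_score: float = 30) -> float:
    """Hake-style normalized learning gain, expressed as a percentage."""
    return (post - pre) / (max_score - pre) * 100

# Hypothetical per-student pre/post scores out of 30 MCQs
study_group   = [normalized_gain(12, 21), normalized_gain(11, 22)]
control_group = [normalized_gain(12, 15), normalized_gain(11, 16)]

# Net learning gain: difference of the group means of NLG
net_gain = mean(study_group) - mean(control_group)
print(round(net_gain, 2))  # → 32.46 for these hypothetical scores
```

Note that averaging per-student gains (as done here and, per the reported group means, in the study) is not the same as computing a single gain from the group mean scores.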
This randomized controlled trial included 68 final-year medical students of either gender. Gender distribution between the control and study groups showed no statistically significant difference (M: F ratio; M = 22, F = 44, P = 0.380). Pre-test scores Baseline knowledge was assessed through pre-test scores before the intervention. The study group had a mean pre-test score of 11.68 ± 2.60 (38.93%), while the control group scored 11.29 ± 2.38 (37.15%). There was no statistically significant difference in baseline knowledge between the two groups ( P = 0.52, independent sample t-test). Comparison between pre and post-test scores of each group Within-group comparisons using paired sample t-tests showed significant improvements in post-test scores for both groups ( P = 0.0001) as shown in Table . Comparison between Post-test scores Post-test scores, conducted on the 7th day after the intervention, demonstrated a significant difference between the two groups. 
The study group scored a mean of 20.88 ± 2.98 (69.32%), compared to the control group, which scored a mean of 15.29 ± 2.66 (51.00%). This difference was statistically significant ( P = 0.0001, independent sample t-test), as shown in Table . Learning gain The difference in learning gain, as evidenced by mean scores of the study and control groups, is shown in Fig. . The percentage gain in learning was calculated for both groups to evaluate the effectiveness of the intervention. The control group, which engaged in reflection alone, demonstrated a percentage gain of 35.43% from pre-test to post-test scores. In comparison, the study group, which received feedback integrated with reflection, achieved a significantly higher percentage gain of 78.77%. Normalized Learning Gain The normalized learning gain (NLG) was calculated to compare the effectiveness of the intervention (feedback-integrated reflection) with that of the control (reflection alone). The study group demonstrated a mean normalized learning gain of 69.07%, compared to 29.18% in the control group. Net learning gain The net learning gain, calculated as the difference in normalized learning gains between the study and control groups, was found to be 39.89%. 
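The between-group post-test comparison can be reproduced from the reported summary statistics alone. The sketch below (an illustration, not the authors' SPSS procedure) computes a pooled-variance two-sample t statistic and approximates the two-sided p-value with the normal distribution, which is reasonable at df = 66:

```python
from math import sqrt
from statistics import NormalDist

def t_two_sample(m1, s1, n1, m2, s2, n2):
    """Pooled-variance two-sample t statistic from summary statistics."""
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    return (m1 - m2) / sqrt(sp2 * (1 / n1 + 1 / n2))

# Post-test means ± SD reported for the study and control groups (n = 34 each)
t = t_two_sample(20.88, 2.98, 34, 15.29, 2.66, 34)
p = 2 * (1 - NormalDist().cdf(abs(t)))  # normal approximation; df = 66 is large
print(round(t, 2), p < 0.0001)  # t ≈ 8.16, consistent with the reported P = 0.0001
```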
This randomized controlled study assessed the effectiveness of integrating feedback with reflection compared to reflection alone in enhancing deep learning among undergraduate medical students, based on their post-intervention MCQ scores. The findings demonstrate that feedback-integrated reflection significantly improved post-test scores in the study group, suggesting improved meaningful or higher-order learning. Unlike previous studies that relied on qualitative measures or peer feedback, this study employed a quantitative approach to measure the impact of metacognitive support on student scores by using individualized, instructor-led feedback on written reflections. The structured feedback process ensured consistency and provided actionable guidance, resulting in significant learning gains. 
This approach aligns with the principles of self-regulated learning, where feedback serves as a scaffold, guiding learners to set goals, monitor progress, and adjust strategies to achieve desired outcomes . Research indicates that feedback during the learning process promotes metacognitive engagement. Learners need feedback to focus on valid indicators of competency development and to remain motivated to reflect on learning. Additionally, they require awareness of the value of metacognitive activities and the autonomy to shape their own learning paths . The study’s results are consistent with earlier findings in the literature: one study demonstrated that practicing clinical cases with integrated feedback and reflection led to improved diagnostic accuracy among dermatology trainees . Similarly, in a Thai study, reflective practice combined with feedback was found to be an effective approach for helping fourth-year medical students enhance their reflective skills and deepen their medical knowledge and skills . Another study by Larsen et al. showed that reflection followed by feedback enhanced students’ history-taking skills, medical knowledge, and reflective writing abilities . These findings reinforce the conclusion that feedback, when integrated with reflective practices, plays a pivotal role in improving medical students’ learning outcomes . By addressing gaps in performance that reflection alone may not identify, feedback-integrated reflection has proven to improve self-regulation, deepen understanding, and enhance clinical competency across various healthcare settings. Two studies conducted in Saudi Arabia and Taiwan found a notable and meaningful association between reflection performance and critical thinking disposition. Furthermore, these studies demonstrated that reflection performance could effectively predict the variability in critical thinking disposition. 
These results suggest that nursing students who actively engage in reflective practices are more likely to possess stronger critical thinking abilities . The findings are strongly supported by Self-Regulated Learning (SRL) Theory, which provides a theoretical foundation for understanding the mechanisms through which feedback enhances learning. In our study, integrating feedback with reflection during the learning process provided a scaffold for self-regulated learning, enabling students to actively engage in their educational journey. Reflection, as a metacognitive tool, enables learners to critically evaluate their experiences, identify areas for improvement, and formulate strategies to address knowledge gaps. Feedback plays a complementary role by guiding learners to refine their reflective process, providing targeted insights into their strengths and weaknesses . This iterative cycle of reflection and feedback allows students to adjust their learning strategies, fostering self-regulation, deeper understanding, and critical thinking . In this study, students who received feedback integrated with reflection demonstrated enhanced metacognitive engagement, as reflected in their significantly higher post-test scores compared to those who engaged in reflection alone. This approach not only improved immediate learning outcomes but also equipped students with metacognitive learning skills essential for medical practice . This study has multiple strengths, one of which is its acknowledgment and mitigation of potential confounders. Variability in baseline knowledge levels was addressed through pre-testing and randomization, ensuring balanced groups. Instructor variability was minimized by having a single facilitator conduct all teaching and feedback sessions. 
Motivation and engagement levels, which could independently influence outcomes, were controlled by standardizing the learning environment and reassuring students that participation would not impact their final grades. Peer interactions, a potential confounding factor, were minimized by structuring sessions to focus on individual reflections and feedback. Additionally, variability in the quality of written reflections was reduced by providing participants with standardized training in reflective writing prior to the intervention. These measures ensured that the observed differences in learning outcomes could be reliably attributed to the intervention itself. Furthermore, the intervention’s adaptability across disciplines and resource-constrained settings highlights its scalability. The results not only confirm the effectiveness of the intervention but also provide actionable strategies for educators to enhance teaching and learning practices. In conclusion, this study demonstrates that feedback-integrated reflection is a powerful tool for enhancing meaningful learning among medical students compared to reflection alone. The intervention’s success in improving learning outcomes supports its inclusion as a core component of educational strategies in medical and health professions education. Limitations The study also has a few limitations. Its short duration may not reflect the long-term effects of feedback-integrated reflection on retention or clinical application, and the focus on a limited number of gynecology cases restricts generalizability. While feedback was standardized, its subjective nature could introduce variability. The study assessed only cognitive outcomes, excluding practical skills and professionalism, and lacked longitudinal follow-up to evaluate the sustainability of learning gains. 
Contributions and future implications By demonstrating the significant improvement in higher-order cognitive skills among students who received feedback-integrated reflection, the findings highlight the added value of feedback in fostering self-regulation, critical thinking, and clinical reasoning. Future research may explore the application of this approach across diverse clinical disciplines and include student perceptions to provide a more comprehensive understanding of its impact.
A Data‐Driven Approach Identifies Subtypes of Caries From Dental Charting | a6aa8ff6-e7df-4a7a-bd9d-a037102bacb4 | 11754153 | Dentistry[mh] | Introduction Complex diseases are influenced by a range of underlying susceptibility and protective factors such as host genetics, social and behavioural characteristics and treatment. The interplay between these factors may contribute towards variation in the clinical presentation of disease, and may give rise to the existence of disease subtypes. There is growing interest in searching for these subtypes in complex disorders including cardiometabolic diseases, cancer, depression and Alzheimer's disease . The ability to identify groups of people with severe or high risk subtypes of disease is potentially useful in a range of scenarios including targeted public health strategies and routine clinical care, as well as helping to provide more personalised and specific counselling and treatment under the precision medicine concept . Dental caries is among the most prevalent, treatment‐demanding, yet preventable, diseases worldwide . It arises from complex interactions of genetic, biological, behavioural, and environmental factors, where dietary intake of free sugars is a key component , but is also recognised as a disease of social deprivation . It presents with substantial variation in clinical manifestation. Despite counselling on oral hygiene, dietary risk factors and fluoride use, 15%–20% of children and adults remain with significant disease activity even in high‐income countries . There is evidence from data‐driven hierarchical clustering (HCA) of tooth surface status showing that tooth surfaces fall into clusters and these are influenced by different genetic factors , suggesting different underlying biology. Similarly, latent class analysis (LCA) has identified subgroups of children with early childhood caries, revealing distinct patterns of disease and tooth microbiota profiles , as well as differing caries trajectories . 
Collectively, the complexity of the upstream determinants of caries as well as the available data from children and genetic investigations suggest that the conditions exist under which there are likely to be different subtypes of caries in adults. To date, there has been little work applying LCA to investigate subtypes of caries in adults, and it is unclear whether LCA‐classification of caries signs can be used to identify meaningful caries subtypes. This study aimed to investigate this by applying a data‐driven approach to identify subgroups of caries from dental charting in an adult population. A necessary prerequisite for this is valid dental charting in a large adult population. Adoption of electronic charting in dental practices provides a potential solution to this problem. In Sweden, the Swedish Quality Register on Caries and Periodontitis (SKaPa, www.Skapareg.se ) is a comprehensive dental register for both children and adults . Personal identification numbers allow linkage of SKaPa‐data to other medical and demographic registers, or population‐based cohorts with screening data and biological samples, such as the Västerbotten Intervention Programme (VIP) and the Malmö Offspring Study (MOS) . While the validity of caries data in SKaPa has been confirmed for children aged 6 and 12 years , its accuracy in adults remains unevaluated. This is needed as the use of SKaPa data in studies pertaining to caries risk prediction and its association with other diseases is on the rise, both within Sweden and internationally. The study (i) evaluated the validity of dental data in adults obtained from the Swedish Quality Register on Caries and Periodontitis (SKaPa); (ii) explored whether latent classes can be identified based on caries information derived from SKaPa; and (iii) characterised dental, medical and behavioural characteristics, including longitudinal change in caries status in young adults to younger elderly, in the LCA‐derived classes.
Methods 2.1 Study Populations The study encompassed two populations in Sweden (the VIP and the MOS). Beginning in 1986, inhabitants of Västerbotten County in northern Sweden were invited to a health screening when they reached the age of 30 (though this age group was only included for a few years), 40, 50, and 60. Consenting participants were subsequently enrolled in the VIP cohort. As of December 2021, approximately 133 000 individuals had provided questionnaire data, with 50% undergoing two or more repeat screenings. For the present study, caries status was searched for in the SKaPa register. The results from VIP were externally validated using data from the MOS cohort, located in southern Sweden. The MOS, conducted from 2013 to 2021, includes children and grandchildren of participants from the Malmö Diet and Cancer Study (MDC). For both cohorts, participants with dental data in SKaPa and aged 18 through 69 years were included. Additionally, the validity of caries information from the SKaPa register was assessed in a dental substudy within the MOS (MODS). The project adheres to the Helsinki Declaration and the General Data Protection Regulation (GDPR); all participants gave written consent when recruited to the basic cohorts, and the project was approved by the Swedish Ethical Review Authority (Dnr 2020–01416 and Dnr 2020–06560; MOS 2012/594, MODS 2013/560). 2.2 Dental Caries Status Caries status at the tooth surface level was searched for in the SKaPa register using personal registration numbers. Dental data originated from examinations by dentists or dental hygienists in public or private dental offices in the Västerbotten (VIP) and Skåne (MOS) regions in northern and southern Sweden, respectively. For the incisor teeth, 4 surfaces were scored, and for premolar and molar teeth, 5 surfaces. Third-molar teeth were excluded.
The caries scores were defined as: D0 for untreated and clinically sound tooth surfaces, D1 for caries in the outer enamel, D2 for caries extending into the enamel-dentin border, and D3 for caries in the dentine. Surfaces with a fissure sealant, enamel hypoplasia, fluorosis, or tooth wear were recorded as D0, and restored surfaces as D3. For missing or crown-covered teeth, 4 surfaces (incisors) or 5 surfaces (premolars and molars) were scored as D3. Non-erupted and congenitally missing teeth were imputed as caries-free. A tooth surface was considered caries-affected if assigned a D2 or a D3, and the sum of caries-affected surfaces was aggregated into the DMFS (decayed, missing, and filled surfaces) and DFS (decayed and filled surfaces) indexes. A total of 1030 adults were enrolled in the MODS substudy, of whom 1024 completed a dental examination. Caries status was recorded by visual inspection, probing using a double-ended dental explorer (Hu-Friedy EXD57), and bite-wing radiographs. Of these, 724 individuals (71%) had a match in the SKaPa register and were used for validation, with the SKaPa data serving as the test method and the clinical examination data serving as the reference method. 2.3 Behavioural Characteristics, Anthropometric and Medical Data For the participants in the VIP cohort, questionnaire data provided information on behavioural characteristics (smoking, snus use) and highest educational level. Questionnaire data was supplemented by anthropometric measurements (BMI [weight/height²], waist circumference) and laboratory tests (triglycerides, total and HDL cholesterol, fasting blood glucose, 2-h post-glucose challenge levels, systolic and diastolic blood pressure) from clinical assessments . Implausible values for anthropometric or laboratory measures, as defined by Region Västerbotten ( https://www.umu.se/enheten-for-biobanksforskning/provsamlingar-och-register/northern-sweden-health-and-disease-study ) were excluded.
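The aggregation rule described above (a surface counts toward DMFS when scored D2 or D3; DFS additionally excludes surfaces of missing teeth) can be sketched as follows. This is a minimal sketch: the per-surface code strings and the missing-tooth flag are a hypothetical data layout for illustration, not the SKaPa export format.

```python
def dmfs(teeth):
    """Count surfaces scored D2 or D3 (decayed, missing, and filled surfaces)."""
    return sum(code in ("D2", "D3")
               for tooth in teeth
               for code in tooth["surfaces"])

def dfs(teeth):
    """As DMFS, but excluding surfaces of missing teeth (decayed and filled only)."""
    return sum(code in ("D2", "D3")
               for tooth in teeth if not tooth["missing"]
               for code in tooth["surfaces"])

# Example: a molar with two restored surfaces, a missing incisor (all 4
# surfaces imputed as D3 per the rule above), and a sound premolar.
mouth = [
    {"missing": False, "surfaces": ["D3", "D3", "D0", "D0", "D0"]},
    {"missing": True,  "surfaces": ["D3", "D3", "D3", "D3"]},
    {"missing": False, "surfaces": ["D0", "D0", "D0", "D0", "D0"]},
]
```

For this example mouth, DMFS counts 6 affected surfaces while DFS counts only the 2 restored molar surfaces, illustrating why the two indexes diverge once teeth are lost.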
Information on dietary habits, including intake of 66 foods/food groups and estimated energy (kCal/day), macronutrients, and alcohol, was available from a food frequency questionnaire (FFQ). A slightly longer version of the FFQ has been validated against repeated 24-h recalls, while the 66-item version used in this study has been validated against biomarkers for B vitamins . A subset of the corresponding data was available for MOS. 2.4 Data Handling and Statistical Analyses Number of teeth and aggregated caries scores (DMFS) derived from SKaPa (test data) and the reference scoring were evaluated by Spearman correlation coefficients, by the intra-class correlation coefficient (ICC, using a two-way mixed model with an agreement coefficient and single measures) between the test and reference groups, and by comparing mean DMFS values in quartiles from the DMFS distributions. Due to a bimodal age distribution in the MOS, these analyses were conducted separately for participants < 40 years and ≥ 40 years (see Figure ). Latent Class Analysis (LCA) was applied to explore hidden structures in the caries pattern across the 128 tooth surfaces using the poLCA package in R. A series of models with 1–9 classes were run for all participants to identify a suitable number of classes. Model selection was based on the Akaike information criterion (AIC), Bayesian information criterion (BIC), and entropy values; ultimately, a five-class model was chosen where the AIC and BIC values plateaued at a low level (Figure ). Subsequently, five-class LCA models were run based on the first available dental recording, including sex and age as covariates. LCA-derived classes are non-ordinal; however, for ease of presentation, the class numbers were recoded to represent the lowest (code I) to the highest (code V) based on mean DMFS. For comparison, participants were also ranked into quintiles (by sex and 10-year age group, coded Q1–Q5) based on the DMFS distribution.
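The agreement statistics used for the validation step can be sketched in plain Python. Assumptions worth flagging: the ICC variant implemented here is ICC(2,1) (two-way model, absolute agreement, single measures), which is taken to be the "two-way mixed model with an agreement coefficient and single measures" named above; the Spearman formula below omits tie correction; and the DMFS pairs are made-up illustrative values, not study data.

```python
def spearman(a, b):
    """Spearman rank correlation via the classic formula (no tie correction)."""
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0] * len(v)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def icc_2_1(rows):
    """ICC(2,1): two-way model, absolute agreement, single measures.
    `rows` holds one list of k ratings per subject (no missing values)."""
    n, k = len(rows), len(rows[0])
    grand = sum(map(sum, rows)) / (n * k)
    row_means = [sum(r) / k for r in rows]
    col_means = [sum(r[j] for r in rows) / n for j in range(k)]
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)   # subjects
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)   # methods
    mse = sum((rows[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k)) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical DMFS pairs: clinical reference vs. register-derived scores.
pairs = [[4, 5], [12, 12], [30, 28], [7, 8], [55, 54], [20, 21]]
rho = spearman([p[0] for p in pairs], [p[1] for p in pairs])
icc = icc_2_1(pairs)
```

Because ICC(2,1) penalises systematic offsets between methods (via the between-methods mean square), it is stricter than a correlation coefficient: two methods can correlate perfectly yet show a low ICC if one consistently scores higher.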
Dental status and phenotypical characteristics were compared across LCA and quintile groups, respectively. Categorical data are presented as numbers or percentages. Continuous data are presented as means with standard deviations (SD). Group differences were tested by ANOVA in generalised linear models (GLM) with sex as a covariate. Medical and behavioural characteristics were compared for participants with a VIP visit within 2 years of the dental recording, and for MOS participants from the same year. Triglyceride measures were logarithmically transformed, and energy-providing nutrients and alcohol intake were evaluated as intakes in grams per day and as their contribution to total energy intake (E%). In the five LCA and quintile classes, incident change in dental status was examined in participants with baseline and 5-year follow-up visits ( n = 42 540 in VIP and n = 1764 in MOS). The analyses were restricted to participants with a reported DMFS difference ≥ 0. Tests for differences in 5-year incidence between groups were carried out using generalised linear modelling. The ability of LCA group and quintile group to predict incident change was compared using AIC, and a relative likelihood test was used to test the null hypothesis that the LCA approach was non-superior. Corresponding criteria and analyses were applied to the MOS data.
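The AIC-based model comparison and relative likelihood test can be sketched generically. The study fitted its models with poLCA in R; the sketch below only shows the information-criterion bookkeeping, with made-up log-likelihoods and a made-up per-class parameter count standing in for fitted models.

```python
import math

def aic(loglik, k):
    # AIC = 2k - 2 ln L, where k is the number of free parameters
    return 2 * k - 2 * loglik

def relative_likelihood(aic_candidate, aic_best):
    """exp((AIC_best - AIC_candidate) / 2): the candidate model's support
    relative to the AIC-best model; values near 0 favour the best model."""
    return math.exp((aic_best - aic_candidate) / 2)

# Made-up fitted log-likelihoods for 1-6 class models, each class adding
# p free parameters; AIC bottoms out where extra classes stop paying off.
p = 130
logliks = {c: ll for c, ll in enumerate(
    [-9000.0, -8400.0, -8100.0, -7950.0, -7940.0, -7938.0], start=1)}
aics = {c: aic(ll, c * p) for c, ll in logliks.items()}
best = min(aics, key=aics.get)  # class count where AIC is minimised
```

The same comparison applies when pitting the LCA-based against the quintile-based predictor of caries increment: a relative likelihood close to zero for the higher-AIC model is evidence against it minimising information loss.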
Results 3.1 Study Numbers and Group Characteristics For VIP, dental records were sought for 132 970 individuals, with a match for 89 211 individuals (67.1%), of whom 10 lacked information on caries status at the tooth surface level (Figure ). Of these, 27 217 participants were excluded due to being 70 years or older, or because status for single teeth was not reported, leaving 61 984 participants for the downstream analyses. In the MOS cohort, records were sought for 5277 participants, with 2842 (53.9%) matches for caries, and 75 were excluded for the reasons mentioned above, leaving 2767 participants for analyses (Figure ). The distribution of age was approximately normal in VIP, but because MOS invited the children and grandchildren of MDC participants the age distribution was bimodal in MOS (Figure ). Both cohorts had similar proportions by sex, similar proportions of never-smokers, and similar mean BMI (Figure ). DMFS was strongly associated with age in both cohorts with similar age patterns (Figure ). 3.2 Validity of SKaPa-Derived Caries Information Of 1024 MODS participants in the dental substudy, 724 (71%) had SKaPa information available (Figure ). The sex and age distribution among these participants mirrored that of the larger MOS cohort, that is, half women and half men, and half below and half above 40 years of age. Scatter plots of reference data versus SKaPa-derived DMFS values showed a virtually linear relation for data from the same year (Figure upper left plot), and when including data spanning from −2 to +2 years (Figure upper right plot), with ICCs > 0.90. Time differences beyond 2 years increased discrepancies between the two methods, and ICCs fell below 0.90 (Figure lower panels).
In agreement, high (≥ 0.89) Spearman correlation coefficients were seen for DMFS and number of teeth assessed by the two methods up to 2 years apart (Figure ), and mean scores in quartile‐ranked groups from SKaPa and reference DMFS distributions, respectively, did not differ (Figure ). 3.3 Characteristics of LCA Identified Classes Versus DMFS Ranked Quintiles The size of the LCA classes varied, with Class II including 18 768 participants and Class III 4021 participants (Table ). This contrasted with the more uniform numbers observed in quintile groups derived from DMFS ranking. Notably, there was some overlap in participants classified into either the lowest or highest quintiles of LCA and DMFS groups (Figure ). The distribution of participants across four 10‐year age groups showed variation by LCA class, with a notable predominance of the oldest group in LCA classes IV and V despite attempting to account for sex and age as covariates in the models (Figure ). A similar pattern between age and DMFS‐ranked quintile groups was also seen (Figure ). Mean DMFS scores varied across LCA classes, from 10.0 in Class I to an extremely high 94.4 in Class V (Figure ), while the range across quintile groups was from 11.9 to 70.6 (Figure ). The rise in DMFS scores was accompanied by a continuous reduction in the number of teeth and, except for LCA Class III, an increase in DFS‐scores (Figure ). Although the sex distribution differed among the classes and quintiles, no consistent trend was evident across these groups (Table ). Alongside differences in total caries experience, there were distinct shifts in the spatial distribution of caries‐affected surfaces when comparing LCA and quintile groups. Thus, LCA class I was defined by affected occlusal surfaces in molar regions and class II by all surfaces in the molar regions and second premolars. 
In LCA class III, ‘caries’ scores were concentrated on the first premolars, while the pattern of class IV appeared similar to class II but with higher mean prevalence per tooth surface. Finally, class V participants had very high caries experience in all regions, including lower incisors (Figure ). The pattern for the DMFS quintiles was similar overall, although the pattern of affected surfaces differed between Q3 and LCA class III, and the overall caries prevalence in Q5 was less extreme than in LCA class V (Figure vs. Figure ). LCA class was associated with risk markers for complex diseases, i.e., smoking, BMI, waist circumference, plasma triglycerides, blood sugar levels, and intake of total carbohydrates and sucrose (Table ). Many of these associations were also observed between risk markers and quintile group (Table ), though with generally poorer model fit statistics, suggesting the LCA approach performs better at identifying groups with distinct health and behavioural characteristics. Some associations were detected with only one approach. For example, an association with 2-h blood sugar following a glucose challenge was seen for LCA class ( p = 1.8 × 10⁻³⁹) but not for quintile group ( p = 0.052). 3.4 Latent Class as a Predictor of Caries Increment The change in DMFS was estimated in the subset of 42 540 VIP participants with a baseline and a 5-year follow-up visit (Table ). There was a U-shaped relationship between LCA class and DMFS increment, with the fastest rate of change in LCA class V and the slowest increment in LCA class III. A similar U-shaped relationship was seen between DMFS quintile and caries increment, with the slowest increment in Q2 and Q3. LCA group V and quintile group Q5 had the highest rates of tooth loss and the lowest rates of DFS increment during follow-up.
3.5 Replication in the MOS Cohort LCA classification was carried out de novo (i.e., with no prior model knowledge from VIP) among 2767 MOS participants using the VIP filtering conditions. Baseline and 5-year follow-up results in MOS largely mirrored those seen in VIP (Tables and ). Thus, LCA classes I and II were large groups, and class III was the smallest, with health and behavioural characteristics and number of remaining teeth deviating from the other classes, and the highest prevalence of surfaces rated D3 on the first premolars (Figure ). The correlation between LCA class and DMFS-ranked quintiles in MOS was consistent with the patterns seen in VIP. LCA class was associated more strongly than quintile group with BMI and smoking, and LCA class III exhibited the lowest increase in DMFS scores over a period of 5 years, matching what was seen in VIP (Table ).
Study Numbers and Group Characteristics For VIP, dental records were sought for 132 970 individuals, with a match for 89 211 individuals (67.1%), of whom 10 lacked information on caries status at the tooth surface level (Figure ). Of these, 27 217 participants were excluded due to being 70 years or older, or because status for single teeth was not reported, leaving 61 984 participants for the downstream analyses. In the MOS cohort, records were sought for 5277 participants, with 2842 (53.9%) matches for caries, and 75 were excluded for the reasons mentioned above leaving 2767 participants for analyses (Figure ). The distribution of age was approximately normal in VIP, but because MOS invited the children and grandchildren of MDC participants the age distribution was bimodal in MOS (Figure ). Both cohorts had similar proportions by sex, similar proportions of never‐smokers, and similar mean BMI (Figure ). DMFS was strongly associated with age in both cohorts with similar age patterns (Figure ).
Validity of SKaPa ‐Derived Caries Information Of 1024 MODS participants in the dental substudy, 724 (71%) had SKaPa information available (Figure ). The sex and age distribution among these participants mirrored that of the larger MOS cohort, that is half being women and half men, and half below and half above 40 years of age. Scatter plots of reference data versus SKaPa‐derived DMFS‐values showed a virtually linear relation for data from the same year (Figure upper left plot), and when including data spanning from −2 to +2 years (Figure upper right plot), and with ICCs > 0.90. Time difference beyond 2 years increased discrepancies between the two methods, and ICCs fell below 0.90 (Figure lower panels). In agreement, high (≥ 0.89) Spearman correlation coefficients were seen for DMFS and number of teeth assessed by the two methods up to 2 years apart (Figure ), and mean scores in quartile‐ranked groups from SKaPa and reference DMFS distributions, respectively, did not differ (Figure ).
Characteristics of LCA Identified Classes Versus DMFS Ranked Quintiles The size of the LCA classes varied, with Class II including 18 768 participants and Class III 4021 participants (Table ). This contrasted with the more uniform numbers observed in quintile groups derived from DMFS ranking. Notably, there was some overlap in participants classified into either the lowest or highest quintiles of LCA and DMFS groups (Figure ). The distribution of participants across four 10‐year age groups showed variation by LCA class, with a notable predominance of the oldest group in LCA classes IV and V despite attempting to account for sex and age as covariates in the models (Figure ). A similar pattern between age and DMFS‐ranked quintile groups was also seen (Figure ). Mean DMFS scores varied across LCA classes, from 10.0 in Class I to an extremely high 94.4 in Class V (Figure ), while the range across quintile groups was from 11.9 to 70.6 (Figure ). The rise in DMFS scores was accompanied by a continuous reduction in the number of teeth and, except for LCA Class III, an increase in DFS‐scores (Figure ). Although the sex distribution differed among the classes and quintiles, no consistent trend was evident across these groups (Table ). Alongside differences in total caries experience, there were distinct shifts in the spatial distribution of caries‐affected surfaces when comparing LCA and quintile groups. Thus, LCA class I was defined by affected occlusal surfaces in molar regions and class II by all surfaces in the molar regions and second premolars. In LCA class III ‘caries’‐scores were concentrated on the first premolars, while the pattern of class IV appeared similar to class II but with higher mean prevalence per tooth surface. Finally, class V participants had very high caries experience in all regions, including lower incisors (Figure ). 
The pattern for the DMFS quintiles was similar overall, although the pattern of affected surfaces differed between Q3 and LCA class III, and the overall caries prevalence in Q5 was less extreme than in LCA class V (Figure vs. Figure ). LCA class was associated with risk markers for complex diseases, i.e., smoking, BMI, waist circumference, plasma triglycerides, blood sugar levels, and intake of total carbohydrates and sucrose (Table ). Many of these associations were also observed between risk markers and quintile group (Table ), but with generally poorer model fit statistics, suggesting that the LCA approach performs better at identifying groups with distinct health and behavioural characteristics. Some associations were detected using only one approach. As an example, an association with 2-h blood sugar following a glucose challenge was seen for LCA class (p = 1.8 × 10−39) but not for quintile group (p = 0.052).
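The model-fit comparison behind this claim (AIC of a grouping model for a risk marker; cf. the supplementary tables) can be sketched for a Gaussian one-way model. The data and labels below are synthetic, and `gaussian_aic` is our own illustrative helper, not the study's code:

```python
import numpy as np

def gaussian_aic(y, groups):
    """AIC of a one-way group-means model with Gaussian errors
    (k = number of group means + 1 variance parameter)."""
    y, groups = np.asarray(y, float), np.asarray(groups)
    rss = sum(((y[groups == g] - y[groups == g].mean()) ** 2).sum()
              for g in np.unique(groups))
    n, k = len(y), len(np.unique(groups)) + 1
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(3)
latent = rng.integers(0, 5, size=500)               # hypothetical class labels
marker = 24 + 0.8 * latent + rng.normal(0, 2, 500)  # risk marker tracking the classes
shuffled = rng.permutation(latent)                  # grouping unrelated to the marker

aic_informative = gaussian_aic(marker, latent)
aic_uninformative = gaussian_aic(marker, shuffled)
```

A grouping that captures real structure in the marker yields a lower AIC than an equally sized but uninformative grouping, which is the direction of the LCA-versus-quintile comparison reported above.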
Latent Class as a Predictor of Caries Increment

The change in DMFS was estimated in the subset of 42 540 VIP participants with a baseline and at least five follow-up visits (Table ). There was a U-shaped relationship between LCA class and DMFS increment, with the fastest rate of change in LCA class V and the slowest increment in LCA class III. A similar U-shaped relationship was seen between DMFS quintile and caries increment, with the slowest increment in Q2 and Q3. LCA group V and quintile group Q5 had the highest rates of tooth loss and the lowest rates of DFS increment during follow-up.
Replication in the MOS Cohort

LCA classification was carried out de novo (i.e., with no prior model knowledge from VIP) among 2767 MOS participants using the VIP filtering conditions. Baseline and 5-year follow-up results in MOS largely mirrored those seen in VIP (Tables and ). Thus, LCA classes I and II were large groups, and class III was the smallest, with health and behavioural characteristics and number of remaining teeth deviating from the other classes, and the highest prevalence in first premolars of a condition rated D3 (Figure ). The correlation between LCA class and DMFS-ranked quintiles in MOS was consistent with the patterns seen in VIP. LCA class was associated more strongly than quintile group with BMI and smoking, and LCA class III exhibited the lowest increase in DMFS scores over a period of 5 years, matching what was seen in VIP (Table ).
Discussion

The study tested and confirmed the validity of caries data in a national register and used this in an LCA approach to search for subtypes of caries from dental charting in a large cohort in northern Sweden. Five classes were found, which replicated in an independent cohort in southern Sweden and which differed in dental caries status, medical and behavioural characteristics, but also in age. Reflecting this age-driven effect, the LCA classes were strongly correlated with DMFS-based ranking groups for most, but not all, classes. The five subtypes identified in the study had similar proportions and characteristics in the discovery and replication datasets. In cross-sectional analysis, these LCA groups were associated with health and behavioural traits, with generally stronger association patterns than those observed using DMFS-ranked quintile groups. Class III emerged as a group with a distinct distribution of caries-affected surfaces (with D3-scored surfaces enriched for first premolars), a better medical and behavioural profile than nearby classes, and low caries increment during follow-up. This class may represent a group where first premolars have been extracted for orthodontic reasons. Class V emerged as a group with high caries experience at baseline, high social and behavioural risk factors, and high DMFS increment and tooth loss during follow-up. Class V had low DFS increment during follow-up, which can be interpreted as a ceiling effect , given that this group has few sound tooth surfaces available to develop new caries, similar to the effect previously reported in children . The classification system used here was based on dental charting and could potentially be applied in healthcare systems without costly biomarkers or additional examinations. One potential use case would be as an empirical way to identify groups of high-risk people who may benefit from targeted intervention.
The design of such an intervention is not considered in the present study, but is suggested as a topic for future research. There are natural limitations to public health interventions which only target high risk groups, since the majority of the population receive no benefit, and universal interventions tend to be most cost‐effective . If the LCA groups are not useful for public health and clinical practice, they may still be helpful to conceptualise caries in epidemiological studies. Specifically, the LCA groups tended to have stronger associations with health and behavioural characteristics compared to quintile groups, and subtleties in associations such as the U‐shaped association between subgroup and DMFS increment would not be visible in analysis using DMFS. One challenge with applying LCA in adult populations is the highly age‐determined distribution of DMFS, which naturally tends to create groups of older versus younger participants. This differs from previous LCA applications in dentistry, where the participants were from narrow age ranges . Although age adjustment was included in the LCA model training to try and reduce this effect, age effects were still highly visible in the LCA groups. This suggests that alternative methods may be needed when strongly age‐patterned traits, such as caries, are evaluated. The study verified data accuracy on the number of existing or missing teeth and DMFS‐scores from the SKaPa register, covering 50%–75% of the adult population. The SKaPa data showed excellent agreement within a two‐year window with reference assessments made in MODS. This is consistent with previous findings of high concordance in children . Concordance diminished when the interval between SKaPa and the reference data increased, which is anticipated, given that new caries or treatment will occur over time. Using SKaPa data within 2 years of exposure may provide a reasonable balance between sample size and measurement accuracy. 
The strength of the present study is the large population‐based derivation cohort and the inclusion of a replication cohort in a distant region of the country with a partly different social context. The main limitation is that SKaPa data does not cover all clinics in Sweden and does not capture people who do not attend a dentist, making it impossible to exclude selection bias. The study leaves some unanswered questions about the role of these LCA groups in clinical practice and public health, which is suggested as a target for future research. Alternative approaches to interrogate latent structure which account for age better are also needed.
I.J. and S.H. initiated and designed the study. N.F., S.H., I.J., L.K., A.E. contributed data analyses and illustrations. D.J. is the co‐initiator and lead designer of MODS. D.J. and P.P. provided and compiled clinical registration data used for the validation section. S.H., I.J. and L.K. drafted the manuscript, and all authors contributed to and approved of the final version.
The authors declare no conflicts of interest.
Appendix S1.
Figure S1. Bar and line plots demonstrating decreasing AIC and BIC values across (A) nine LCA classes in VIP and (B) seven classes in MOS. Due to its size, the MOS cohort cannot be divided beyond seven classes. The models include sex and age as covariates.
Figure S2. Panel of odontograms illustrating mean caries prevalence per tooth surface for (A) the five LCA classes (class I–V) and (B) the five quintiles based on the DMFS distribution (Q1–Q5) in the MOS cohort. White indicates caries-free (0) and dark red that all surfaces are caries-affected (1).
Table S1. Baseline dental status and medical phenotype characteristics by (A) LCA class and (B) quintile ranking in sex and 10-year age strata in VIP participants. (C) AIC values from LCA or quintile rank group models, their ratio, and p-value for non-superiority test.
Table S2. Dental status and medical phenotype characteristics by (A) LCA class and (B) quintile ranking in sex and 10-year age strata in VIP participants with 5-year follow-up data. (C) AIC values from LCA or quintile rank group models, their ratio, and p-value for non-superiority test.
Table S3. Baseline dental status and medical phenotype characteristics by (A) LCA class and (B) quintile ranking in sex and 10-year age strata in MOS participants. (C) AIC values from LCA or quintile rank group models, their ratio, and p-value for non-superiority test.
Table S4. Dental status and medical phenotype characteristics by (A) LCA class and (B) quintile ranking in MOS participants with 5-year follow-up data. (C) AIC values from LCA or quintile rank group models, their ratio, and p-value for non-superiority test.
Visualization OPLS class models of GC-MS-based metabolomics data for identifying agarwood essential oil extracted by hydro-distillation

The indigenous Chen-Xiang of China is a resinous part of Aquilaria sinensis (Lour.) Gilg in the Thymelaeaceae family. It is mainly produced in Guangdong, Guangxi, Hainan, and Fujian provinces . The genus Aquilaria consists of more than 21 species, and all species of Aquilaria and Gyrinops have appeared in Appendix II of the Convention on International Trade in Endangered Species of Wild Fauna and Flora since 2004 (Amendments to appendices I and II of CITES, 2004). Aquilaria malaccensis (A. agallocha Roxb.) is mainly produced in Malaysia, India, Laos, Cambodia, Thailand, and Taiwan island . In Vietnam, Aquilaria crassna (Kỳ Nam, Trầm Hương, Dó Bầu) is the most important variety and is also widely distributed in Cambodia and Thailand . Agarwood is harvested extensively to obtain aromatic oils through a distillation process. The oils have traditionally been used in perfumes in the Middle East and are widely used in fine perfumes, toiletries, fragrance additives, and other biotechnology products . The oleoresin component only exists in withered and dying Aquilaria trees. In recent years, owing to the increasing demand and commercial value of agarwood, the trade in agarwood has intensified, leading to the destruction of natural agarwood forests. As a crucial aspect of aromatherapy, the extraction method of an essential oil has a significant impact on its components. According to the definition of the International Organization for Standardization (ISO) (ISO/DIS 9235.2), an essential oil is a product made by water or steam distillation, by mechanical processing of citrus peel, or from natural materials. In addition, the products of CO2 supercritical extraction, subcritical fluid extraction, or organic solvent extraction are also called agarwood extracts or agarwood oils .
The main components are sesquiterpenes, phenylethyl chromone derivatives, and aromatic compounds , . Chromone components can be used as important indicator components for the quality evaluation and identification of agarwood. It is generally believed that essential oils with a high content of chromone compounds have better quality , . Among the methods of extracting AEOs, steam distillation is also common. The extraction device is simple and the product is natural and pollution-free, but only low-boiling-point components can be extracted, and characteristic components such as chromones are lost, resulting in a low yield of essential oil . Different extraction methods, plant species, and geographic origins of agarwood oils lead to differences in market price, which encourages adulteration of the essential oils . Research on AEOs has mainly focused on the identification of chemical components, but comparison of compounds alone is not enough to support the establishment of a quality evaluation system for agarwood . Metabolomic data usually span a wide dynamic range of metabolite concentrations, reflecting plant variety, the geography and phenology of the production area, and the processing method. Although the data obtained by GC-MS can be compared with the common components of AEOs through sampling from a wide range of sources, these common components still cannot serve as statistically verified indicators. Our work focused on AEOs obtained by hydro-distillation from seven regions, covering different habitats and species. Different chemical constituents were identified by GC-MS fingerprinting and multivariate statistical analysis, including partial least squares-discriminant analysis (PLS-DA), orthogonal partial least squares-discriminant analysis (OPLS-DA), and SPSS cluster analysis , . This multivariate statistical approach is used to handle the complex data generated by GC-MS.
It helps in identifying characteristic chemical markers and distinguishing samples from different regions and species – . The methods together provide a comprehensive approach to analyzing AEOs, ensuring accurate identification and differentiation of their chemical components. These components were identified by using NIST general database retrieval and literature review, providing reference for the overall quality evaluation of AEOs. GC-MS analysis of volatile components and common chemical compounds According to the GC-MS standard mass spectrometry database NIST2020, the volatile components of AEO samples from different habitats and 3 species were analyzed, as shown in Table . The 2-(2-phenylethyl) chromone compounds were determined based on the mass spectrometry characteristics and fragmentation patterns summarized in literature , combined with the characteristics of ion fragments in this study. A total of 127 compounds with more than 85% similarity were identified from the essential oils of 7 regions, accounting for 28.6–74.6% of the total volatile components (Table )(Supplement file Figure ). According to different chemical structures, these volatile components are classified into sesquiterpenes (0.3–70.4%), aromatic compounds (0.6–24.9%), aliphatic compounds (0–8.4%), 2-(2-phenylethyl) chromones (0–12.0%) and others (0.3–7.6%) (Fig. ). Sesquiterpenes had a total of 73 components, with the most abundant being sesquiterpenes. Aromatic groups followed, consisting of 12 components in total. Aliphatic groups and 2-(2-phenylethyl) chromones were the least, with 10 and 4 components, respectively. In the AEOs extracted by hydro-distillation, due to the high temperature of water vapor in the extraction process, or the influence of the solvent used in the analysis, the 2-(2-phenylethyl) chromone component often did not appear . 
However, the AEOs obtained by supercritical fluid extraction and microwave-assisted extraction usually contain flidersiachromone, 6-methoxy-2-(2-phenylethyl) chromone, 4H-1-benzopyran-4-one chromone, and a few other semi-volatile 2-(2-phenylethyl) chromones , . In this study, more aromatic components could be extracted from the hydro-distillation AEO samples by pretreatment with 95% ethanol for GC analysis, and chromone components could also be detected . Low-molecular-weight aromatic compounds are important components of AEOs and are frequently regarded as the primary source of their aroma. More aromatic compounds were detected in the resinous agarwood; they were absent from the non-resinous parts and were confirmed as characteristic of the resinous parts . AEOs contained abundant fatty acids, possibly reflecting the complex process of resin accumulation: a prolonged accumulation period means a longer formation time is required for agarwood oil yield . Based on the aliphatic relative content of the samples (Fig. ), they ranked as S3 Taiwan (8.38%) > S2 Hainan (2.34%) > S7 Cambodia (1.71%) > S1 Guangxi (1.64%) > S6 Vietnam B (0.68%) > S5 Vietnam A (0.15%) > S4 Malaysia (0%). In practical terms, we also consider that the lower the fatty acid content, the better the quality of the agarwood. The phytocomplexity of the AEOs signifies the production of a multitude of plant–fungus mediated secondary metabolites as chemical signals for natural ecological communication. Table shows an aromatic compound, 4-phenyl-2-butanone, as the only common component. A similar component, 3-phenyl-2-butanone, also appeared in hydro-distilled essential oils of A. malaccensis and A. sub-integra from Malaysia, Thailand, and Cambodia . This common component, 4-phenyl-2-butanone, present in A. malaccensis, represents an important basis for plant–fungal metabolic chemistry in wild plants and in vitro plantlets , .
We consider that the S2 sample should contain little or no agarwood, as most of its heartwood was directly extracted by hydro-distillation or might have been adulterated with unknown chemical essences. Therefore, six common components could be detected in the other 6 AEOs (all except the S2 sample), namely, 1,1,4,5,6-pentamethyl-2,3-dihydro-1H-indene (aromatic compound), viridiflorol (sesquiterpenoid compound), bis(2-ethylhexyl)phthalate (aromatic compound, plasticizer), in addition to 5-(2-methylpropyl)-nonane and 2,6,10-trimethyl-dodecane (others). One terpenoid of particular interest is viridiflorol, a known common fragrance molecule of agarwood . Viridiflorol has shown moderate antibacterial activity against Mycobacterium tuberculosis , the causative agent of tuberculosis, in an in vitro assay. It is also produced by the endophytic root fungus Serendipita indica and exhibits antifungal activity against Colletotrichum truncatum . It was particularly surprising that bis(2-ethylhexyl) phthalate, diethyl phthalate, and dibutyl phthalate were detected in this study (Table ). These components are often used as plasticizers, condensing agents, anti-wear agents, and gas chromatographic stationary liquids for polyvinyl chloride resins. The plasticizers are mixed with some food oils to reduce product costs and should not be counted as effective components of AEOs. Studies have shown that excessive intake of these plasticizers can have adverse effects on human reproduction, development, and the cardiovascular system . The total content of plasticizer added in the S4 sample accounted for about 23.9%. The quality of the S4 essential oil was poor; plasticizers were also detected in some other samples but at low levels, which might be due to accumulation by the plants themselves or to contamination introduced during GC analysis . Sesquiterpenoids are natural terpenoids containing 15 carbon atoms in a molecule composed of three isoprene units .
In addition to viridiflorol, by comparing the various samples we identified the following 6 common components worth noting: elemol (sesquiterpene), γ-eudesmol (sesquiterpene), (−)-aristolene (sesquiterpene), agarospirol (sesquiterpene), 2(3H)-naphthalenone, 4,4a,5,6,7,8-hexahydro-4a,5-dimethyl-3-(1-methylidene)-, (4aR-cis)- (sesquiterpene), and 2-phenyl-4H-chromen-4-one (chromone derivative) (Table ). According to modern pharmacology, the sesquiterpene components of agarwood have good biological activity in the central nervous system, respiratory system, and digestive system, among others. Elemol is a natural sesquiterpenoid that serves as a fragrance and shows modest antioxidant, anti-inflammatory, and antiproliferative activities in the essential oil of Cymbopogon nardus , . Plants with aromatic properties have multiple chemical components in their essential oils; for example, the main components of Blepharocalyx salicifolius were viridiflorol and eudesmane sesquiterpenes , . In the fungus-mediated fermentation of resinous agarwood, the most significant finding was the appearance of key agarwood sesquiterpenes such as agarospirol, γ-eudesmol, and (−)-aristolene . The sesquiterpenoid 2-(3H)-naphthalenone, 4,4a,5,6,7,8-hexahydro-4a,5-dimethyl-3-(1-methylidene)-, (4aR-cis)- has not been reported in other agarwood literature. However, it performed well in the comparison of common components in this study, and its relative content was higher than that of the other components. One of the main active components of agarwood, chromone, has been isolated and found to have 240 different subunits. It has anti-inflammatory and anti-tumor properties, neuroprotective effects, and inhibitory effects on acetylcholinesterase, tyrosinase, and glucosidase . It is worth noting that 2-(2-phenylethyl) chromones often do not appear in essential oil extracted by hydro-distillation.
However, among the four chromones analyzed in this study, the common component, 2-phenyl-4H-chromen-4-one, was present in the 5 regions, and this chromone component has not been reported in other literature. There were significant differences in the relative content of each component, which might lead to differences in the special flavors of agarwood from different habitats.
PLS-DA is a variant of PLS used to build classification models and is suitable for supervised discriminant analysis when differences between groups are small . It is applied to prediction and descriptive modeling, as well as to selecting discriminative variables, determining the chemical compositions of different genotypes and production regions, and automatically generating the more important principal components , . The PLS-DA model displayed clear separation among the 7 regions and 3 genotypes of AEOs (Fig. a). The software automatically generated R2X (cum) = 0.848, R2Y (cum) = 1, and Q2 (cum) = 0.854 for predictive ability. In a previous agarwood study, HPLC chromatograms were used in combination with multivariate statistical screening to establish identification methods for wild and cultivated agarwood, and Fisher linear recognition and PLS-DA recognition models were established . This study established PLS-DA based on GC-MS data (Fig. a), with Q2 > 0.5 indicating strong predictive ability. The result showed that there were significant differences in the volatile components of AEOs from different habitats. In addition, discrimination of the different genotypes was achieved to a certain extent. Permutation validation in SIMCA 14.1 software was used to verify the fitting of the PLS-DA model (Fig. b). Through 200 iterations of permutation testing, the model results showed that the Y-axis intercepts were all less than 0, indicating that the PLS-DA model validation results were reliable.
In addition, Hotelling's T2 analysis verified that all samples were within the 95% confidence interval , ; these validation results provided a further evaluation of model performance (Supplementary file Fig. S2). Supervised methods offer another approach to classification, enhancing the discrimination between specimens by minimizing within-group variance . In this study, PLS-DA was utilized to classify AEOs produced in China (CNA) and outside China (OCA). The model indicated that S4–S7 (OCA) were dispersed along both the first and the second principal components, whereas the AEOs produced in CNA (S1–S3) clustered within one quadrant (Fig. c). The software automatically generated R2X (cum) = 0.84, R2Y (cum) = 1, and Q2 (cum) = 0.981, suggesting that the variation in volatile components of the OCA AEOs was significantly higher than that of the CNA AEOs. Through 200 iterations of permutation testing, the model results showed that the Y-axis intercepts were all less than 0, indicating that the PLS-DA model validation results were reliable (Fig. d). However, the model was not effective for screening differential volatile markers (Fig. c), so we applied OPLS-DA to identify these markers. Supervised OPLS discriminant analysis (OPLS-DA) was applied to identify the volatile markers of AEOs from different habitats. OPLS has excellent external prediction ability as well as a better visualization effect compared with PLS . In the OPLS-DA scatter plot (Fig. e), the R2X, R2Y, and Q2 of the S1–S3 AEO samples from China and the S4–S7 samples from outside China were 0.84, 1, and 0.997, respectively. The samples were located on both sides of the first principal component, with the X-axis at 0, indicating that the volatile components of the AEOs produced in CNA (S1–S3) could be effectively distinguished from those of the other regions (S4–S7, except S5), and that the genotypes and relative contents of the AEOs differed.
A 200-iteration permutation test was conducted to verify the OPLS-DA model, and Hotelling's T2 analysis showed that all samples were within the 95% confidence interval. In the multivariate statistical analysis, S5 behaved as an outlier, its differential components being the largest relative to those of the other samples; the differential components of S5 could therefore be used as volatile markers of A. crassna. Although the PLS-DA model discriminated the genotypes well, the regional characteristics were not obvious in the score plot, and the SIMCA software could not present the variable influence on projection (VIP) for it. OPLS-DA maximizes the difference between groups and attenuates the difference within groups, making it more suitable for separating samples between groups. Therefore, the VIP diagram was generated for further analysis. The VIP value and the S-plot evaluation method were employed to identify the key components contributing to the grouping of AEOs. The S-plot, a scatter plot combining the covariance and correlation loading profiles resulting from an OPLS-DA model, was utilized. Variables with a VIP greater than 1 were deemed statistically significant and served as important markers of the model. The VIP values (Fig. a) and S-plot (Fig. b) generated by the OPLS-DA model revealed 26 components with a VIP value > 1 (Table ). Components far from the origin of the S-plot contributed strongly to the classification and were more reliable as potential markers for distinguishing AEOs from different producing regions than near-origin components. Statistical tests (in SPSS) were carried out on the significant variables to make the model acceptable.
It is worth noting that sesquiterpenes and chromones are the index components of agarwood: the sesquiterpenes α-gurjunene (VIP = 4.86), agarospirol (VIP = 2.86), alloaromadendrene (VIP = 2.49) and aristolene (VIP = 2.37), together with the chromone 2-phenethyl-4H-chromen-4-one (VIP = 2.63), were the components with VIP > 2 between CNA and OCA. α-Gurjunene exhibited the highest VIP value, especially in "Hui-An" agarwoods; OPLS-DA analysis revealed that this component did not appear in the CNA (S1-S3) group (Table , No. 19). The main factor driving this VIP value was the 40.82% content detected in S5, but such an outstanding VIP value also makes the component easy to distinguish among complex plant metabolites. Moreover, most aromatic components carry distinct aromas associated with AEOs. Bis(2-ethylhexyl) phthalate raises concerns due to its toxicity, and it remains uncertain whether it originated from the AEOs themselves or from pyrolysis during extraction. The 16.4% content of bis(2-ethylhexyl) phthalate detected in sample S4 likewise affected the VIP values of the inter-group comparison; on the other hand, this clearly highlighted the identification of this component. AEOs from CNA contained more guaiol, and those from OCA (Malaysia, Vietnam, and Cambodia) contained more α-gurjunene. The total relative contents of the differential components were higher in OCA than in CNA. Notably, two sesquiterpenes, α-gurjunene and agarospirol, stood out in the S-plot diagram, being distanced from the origin and the main compound groups (Fig. b). Specifically, α-gurjunene (VIP = 4.86) significantly influenced the grouping of the AEO samples and was positively correlated with it. Prior studies have employed the OPLS-DA model to discriminate between A. sinensis and its subspecies "Chi-Nan" and to identify potential distinguishing components.
Notably, sesquiterpenes, particularly guaiane and eudesmane derivatives, were considered key markers contributing to their odoriferous properties. Similarly, the sesquiterpenes in AEOs also exhibited significant differences, indicating their potential as characteristic components. In this study, OPLS-DA effectively modeled two or more classes. In addition to the CNA and OCA analyses mentioned above, the three agarwood genotypes, A. sinensis, A. malaccensis and A. crassna, were classified and compared in pairwise fashion (Fig. ). The OPLS-DA model analyzed the differential components among the different producing regions. The model results indicated that when comparing A. sinensis and A. malaccensis, R 2 X (cum) = 0.689, R 2 Y (cum) = 1 and Q 2 (cum) = 0.86; when comparing A. sinensis and A. crassna, R 2 X (cum) = 1, R 2 Y (cum) = 1 and Q 2 (cum) = 1; and when comparing A. malaccensis and A. crassna, R 2 X (cum) = 1, R 2 Y (cum) = 1 and Q 2 (cum) = 1, indicating that the models could describe most of the GC-MS data and possessed good predictive ability. The volatile components of AEOs exhibited certain similarities within the same genotype, but differences existed between genotypes. VIP values and S-plot diagrams were used to screen the differential chemical components contributing most to each pairwise genotype comparison (Fig. ). The VIP results showed 25 components with a VIP value > 1 between A. sinensis and A. malaccensis, with the sesquiterpene guaiol (VIP = 2.55) being the largest-contributing component between the two genotypes (excluding bis(2-ethylhexyl) phthalate) (Fig. a). Comparing A. sinensis and A. crassna, 25 components with a VIP value > 1 were identified, with α-gurjunene (VIP = 5.28) being the largest-contributing component (Fig. c). Between A. malaccensis and A. crassna, 22 components with a VIP value > 1 were found, with α-gurjunene (VIP = 5.03) being the largest-contributing component (Fig. e).
The differential component analysis revealed that AEOs from A. sinensis contained relatively more guaiol and 2-phenethyl-4H-chromen-4-one, whereas those from A. malaccensis contained more of the sesquiterpene 2-(4a,8-dimethyl-2,3,4,5,6,8a-hexahydro-1H-naphthalen-2-yl)propan-2-ol (Fig. a and b). Additionally, when comparing the AEOs of A. sinensis and A. crassna, aside from the difference in guaiol, the A. sinensis AEOs also contained more agarospirol and 2-phenethyl-4H-chromen-4-one, while the AEOs of A. crassna contained more of the sesquiterpenes α-gurjunene and alloaromadendrene (Fig. c and d). In the pairwise comparison of A. malaccensis and A. crassna, α-gurjunene exhibited the most significant difference in composition contribution (Fig. e and f), with the AEOs of A. malaccensis containing more of the sesquiterpene 2-(4a,8-dimethyl-2,3,4,5,6,8a-hexahydro-1H-naphthalen-2-yl)propan-2-ol and the AEOs of A. crassna containing more of the sesquiterpene γ-eudesmol. These pairwise-genotype contributions could serve as potential markers to distinguish AEOs of different species. The results demonstrated that the production regions of AEOs could be well distinguished based on chemometrics. Analysis of such multivariate data requires methodology capable of handling both the contribution to the OPLS model (i.e., concentration variant) and the correlation to the OPLS model (i.e., concentration invariant). A statistical test in SPSS (P < 0.05) was carried out on the significant variables to make the model acceptable. Based on the above OPLS-DA results, the unique phytochemical characteristics of the various species may be related to the genetic information of the original plant germplasm or its endophytic fungi. The current strategy focuses on this complex problem, emphasizing that additional information can be obtained when appropriate multivariate modeling is combined with effective visualization of specific marker metabolites for identification.
AEOs, obtained mainly by hydro-extraction, have a significant international market, particularly in Muslim regions. For the first time, we utilized GC-MS to delineate the chemical fingerprints of AEOs of three primary genotypes, A. sinensis, A. malaccensis and A. crassna, and analyzed the differences in aroma components across various production regions. Metabolomics data typically encompass vast dynamic ranges in metabolite concentration. Here, we reveal distinctive differences in sesquiterpenes, chromone and its derivatives, and low-molecular-weight aromatic compounds. A total of 127 compounds were identified from the AEOs, with sesquiterpenes comprising the majority, totaling 73 components. The aromatic compound 4-phenyl-2-butanone was the sole component common to all seven samples. Additionally, there were 7 common components with a higher occurrence of sesquiterpenes and chromone: viridiflorol; elemol; γ-eudesmol; aristolene; agarospirol; 2(3H)-naphthalenone, 4,4a,5,6,7,8-hexahydro-4a,5-dimethyl-3-(1-methylidene)-, (4aR-cis)-; and 2-phenyl-4H-chromen-4-one. It was particularly surprising that the plasticizers bis(2-ethylhexyl) phthalate, diethyl phthalate and dibutyl phthalate were detected in this study. The total plasticizer content in the S4 sample accounted for about 23.9%, indicating the poor quality of the S4 essential oil. Other samples exhibited low levels of detection, likely due to contamination during GC analysis. PLS-DA and OPLS-DA methods were employed for multivariate statistical analysis of the differential chemical components between different genotypes and habitats. The results demonstrated that the AEOs from different habitats could be effectively classified and identified based on GC-MS combined with chemometrics. In the OPLS-DA, 26 differential markers, including 17 sesquiterpenes, 2 chromones and 3 aromatics, were identified according to their VIP values.
The VIP values and S-plots generated by the comparison of the regional groups (CNA and OCA) in the OPLS-DA model showed a total of 26 potential markers with VIP > 1, and up to 25 potential markers were generated by each pairwise genotype comparison. Agarwood components such as α-gurjunene, agarospirol, guaiol, γ-eudesmol and 2-phenethyl-4H-chromen-4-one, which contributed the most to the VIP values, were searched for and summarized in the agarwood-related literature. The unique phytochemical characteristics of agarwood may be related to the interplay between the original plant germplasm and invasive microorganisms. The current strategy focuses on this complex issue. Using multivariate statistical analysis, the indicator components can be scientifically highlighted, even when additional chemicals, such as plasticizers, are added to reduce product costs. Therefore, the strategy emphasizes providing and obtaining additional information when appropriate multivariate modeling is combined with effective visualization of specific marker metabolites for identification.

Experimental section

Plant materials

AEOs from seven regions were collected: from Guangxi, Hainan and Taiwan within China, and from producing areas in Vietnam, Cambodia and Malaysia in Southeast Asia. The essential oils were obtained through water or steam distillation in the production regions or purchased from local shops. Six samples were randomly selected from each planting region. See Table for the source information.
Sample preparation: 30 mg of essential oil was accurately weighed into a 5 mL EP tube, and 2 mL of ethyl acetate (China National Pharmaceutical Group Chemical Reagent Co., Ltd., China) was added to dissolve it. The solution was shaken well and allowed to stand for 2 h. A 1 mL aliquot of the essential-oil solution was then filtered through a 0.45 μm membrane in preparation for gas chromatography-mass spectrometry analysis. The compositions of the essential oils were analyzed on a GCMS-QP2010 Plus (Shimadzu, Tokyo, Japan) equipped with an SH-Rxi-5Sil MS capillary column (30 m × 0.25 mm i.d., 0.25 μm film thickness; Shimadzu, Japan). The temperature program was as follows: initial temperature 90 °C for 2 min, then increased by 2 °C min −1 to 150 °C and held for 5 min, and then increased by 2 °C min −1 to 280 °C and held for 5 min. The other parameters were as follows: injection temperature, 250 °C; ion source temperature, 230 °C; EI, 70 eV; carrier gas, He at 1 ml min −1 ; injection volume, 1 ml; split ratio, 1:20; solvent delay, 2.5 min; and mass range, m/z 50–550. Quantification was obtained from the percentage peak areas of the gas chromatogram. Identification of individual compounds was carried out using the NIST2020 (National Institute of Standards and Technology, US Department of Commerce) Registry of Mass Spectral Database by searching against the spectra of authentic references. Chromatographic results expressed as area percentages were calculated with a response factor of 1.0.

Methodological examination

Precision test

From the S1–S7 regions of AEOs from different sources (Table ), one region (e.g. S1) was randomly selected.
Out of the six samples in each region, equal amounts were drawn and thoroughly mixed to form one sample. The test solution was prepared according to the preprocessing description above, and GC-MS analysis was conducted under the chromatographic and mass spectrometric conditions given above. Following the same process, the analysis was repeated six times on the mixed S1 sample. The six datasets were compared using the Similarity Evaluation System for Chromatographic Fingerprint of Traditional Chinese Medicine (Version 2012) (Chinese Pharmacopoeia Commission, China), and a similarity of no less than 0.99 indicated good precision of the instrument.

Repeatability test

For the repeatability test, samples of AEOs from the same source (such as S1) were used. Six samples were prepared according to the steps described above, with each sample weighed precisely. GC-MS analysis was conducted as described above. The six datasets were compared using the Similarity Evaluation System for Chromatographic Fingerprint of Traditional Chinese Medicine, and a similarity of no less than 0.99 indicated good repeatability of the method.

Stability test

A sample solution from S1-S7 (such as S1) was randomly selected and prepared following the preprocessing steps. The solution was stored for different times (2, 4, 6, 8, 12, and 24 h) before GC-MS analysis. The six datasets were compared using the Similarity Evaluation System for Chromatographic Fingerprint of Traditional Chinese Medicine, and a similarity of no less than 0.99 indicated that the test solution was stable within 24 h.

Data processing

Each experiment was repeated three times. Based on the NIST2020 database, the volatile components of the samples were qualitatively identified by mass spectrometry. Peak area normalization was used to calculate the relative percentage content.
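The peak-area normalization and the similarity-based acceptance rule can be illustrated with a toy sketch. Cosine (congruence) similarity is used here only as a stand-in for the score produced by the Similarity Evaluation System, whose exact algorithm is not specified in the text; the peak table is synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in peak table: raw integrated peak areas for one chromatogram.
areas = rng.random(50) * 1e6

# Peak-area normalization: relative percentage content of each component.
relative_percent = 100.0 * areas / areas.sum()

def similarity(a, b):
    """Cosine (congruence) similarity between two aligned fingerprints;
    a stand-in for the Similarity Evaluation System score."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A replicate injection with small instrument noise should pass the
# similarity >= 0.99 acceptance rule used in the methodological examination.
replicate = areas + rng.normal(scale=0.001 * areas.std(), size=areas.size)
passes = similarity(areas, replicate) >= 0.99
```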
Substances with a similarity greater than 85% were identified as potential chemical components of AEOs. Using the Similarity Evaluation System for Chromatographic Fingerprint of Traditional Chinese Medicine (Version 2012) with a time window of 0.2, automatic matching was performed through multi-point correction using the median method. The similarity and the common peaks between each sample and the reference map were calculated, and a GC-MS fingerprint map was constructed.

Multivariate analysis

SIMCA 14.1 software (Umetrics Co., Sweden) was used for multivariate data analysis. The compound data were normalized, and the software then performed multivariate statistical analysis through its PLS-DA and OPLS-DA modules. PLS-DA and OPLS-DA were introduced for discrimination and for the derivation of potential markers (VIP score > 1). Finally, cluster analysis was carried out with the SPSS 27.0 data-processing software. Univariate statistical analysis was introduced to confirm the differentially expressed features (p < 0.05). The cluster analysis used between-groups linkage, with Euclidean distance as the sample measure, to determine the differences between the producing regions and species of AEOs.
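The cluster-analysis step described in the Multivariate analysis section can be sketched as follows. SPSS's between-groups linkage corresponds to average (UPGMA) linkage, so SciPy's implementation is used here as a stand-in, on a synthetic composition table rather than the study's data.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(3)

# Toy composition table: two "regions" of 6 samples each, 10 components,
# with clearly different mean compositions.
region_a = rng.normal(0.0, 0.5, size=(6, 10))
region_b = rng.normal(3.0, 0.5, size=(6, 10))
X = np.vstack([region_a, region_b])

# SPSS's between-groups linkage corresponds to average (UPGMA) linkage.
Z = linkage(X, method="average", metric="euclidean")
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the dendrogram into 2 clusters
```

With well-separated compositions, cutting the dendrogram into two clusters recovers the two sample groups.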
Interpersonal attunement in social interactions: from | 14b79591-260c-4e75-938a-290f6e9bbe5c | 9791489 | Physiology[mh] | . Interpersonal attunement in and through social interaction People tend to ‘fall in synchrony’ in social interactions, e.g. through spontaneously aligning rhythmic behaviours, such as gait and clapping or adopting others' mannerisms, from shaking a foot or scratching the head to adopting each other's speech styles, emotions and moods . Intriguingly, humans oftentimes attune to each other beyond personal cost-benefit evaluation (e.g. informal social norm following; ) and even against their actual intention . Furthermore, interpersonal attunement in social interactions has not only been demonstrated at the behavioural, but also at the neural level, including phenomena ranging from motor synchrony and shared psychological perspectives , to collective musical experience and real-life social relationships . All of these instances of interpersonal attunement can be thought of as facilitating human communication, collaboration and eventually trust-based social relationships, by virtue of locally decreasing interpersonal complexity , while providing new tools of self-regulation at the individual level. Indeed, from developmental psychology to social neuroscience and psychiatry, an increasing number of findings have demonstrated the critical role of interpersonal coordination in the development and maintenance of our social—and even individual—abilities. It is through co-ownership of social interactions that babies learn the distinction between self and other , the temporality of social exchanges , and the notion of a shared world . 
Later on, it is through engagement in social interaction that cognitive abilities such as language and ‘theory of mind’ emerge, leading to the ability to effectively communicate and make sense of one's own and other persons' intentions and beliefs, but also to the generation of certain social biases associated with expectations towards social norms. In a nutshell, interpersonal attunement in early social interactions lays the groundwork for human becoming, via the co-construction of progressively symbolic communication and its internalization as inner speech. In this regard, our view is inspired by and consistent with descriptions provided by Vygotsky, who already at the beginning of the previous century suggested that all ‘higher’ mental processes within an individual result from an internalization of prior social interactions between people. Furthermore, he proposed that every mental function of this kind appears twice in a child's development, first at a social level (i.e. ‘intermind’) and then at an individual level (i.e. ‘intramind’): in brief, he suggested that ‘through others we become ourselves’. Taken together, interpersonal attunement, here, is defined as the set of multi-scale processes of establishing joint and eventually collective interaction patterns and expectations about states of the world and social behaviour. Such interpersonal attunement, in turn, enables and facilitates intrapersonal attunement (here taken as the set of self-regulation processes, e.g. inner speech and interoception), and vice versa. Critically, interpersonal attunement does not only include harmonic developments, but also at times dramatic tensions and conflicts that appear instrumental in driving change across the entire lifespan.
In this light, and leaning on diverse perspectives ranging from second-person neuroscience, dialectics and enactivism to dynamical systems, active inference and machine learning, in this article we argue that a multi-scale, fine-grained analysis of social interaction might help us to elucidate the underlying behavioural and neural mechanisms both at the level of the individual and at the level of interacting bodies and minds. More concretely, in the current section, we discuss the role of interpersonal attunement in social interactions and how it shapes the formation of the self. In the second section, we describe how psychopathology can be construed as interpersonal misattunement. In the third section, we describe the paradigm of collective psychophysiology as a methodology to empirically study interpersonal (mis)attunement. In the fourth section, we suggest an integrative clinical, empirical and computational research line, which aims at formally defining a multi-dimensional relational space of conditions embracing different levels of analysis, and discuss how such developments could contribute to what could be described as an inter-personalized psychiatry. We end this article by describing some of the societal implications of our approach. (a) The dialectics of internalization and externalization In this first section, we place our focus on the individual, reviewing human development and becoming as the dynamic interplay between (social) internalization and (collective) externalization in and through social interaction. On the one hand, internalization, here, can be thought of as the co-construction of bodily structures actively reflecting the social world and the organism. On the other hand, externalization is taken as the collective construction of the world, including other (inter-)bodily structures.
Along those lines, the collective level and the level of the individual appear inextricably linked, as an interrelation between the formation of multi-scale hierarchical bodily structures (e.g. psychophysiological states) and interpersonal statistical regularities beyond the individual (e.g. social norms). Below we further unpack the key notions of internalization and externalization, while connecting them to recent computational theories of brain and bodily function. Our notion of internalization is based on the Vygotskian take, thought of as the internal reconstruction of an external operation, further operationalized within the predictive processing framework. Predictive processing has been defined as a hierarchical bidirectional process through which an organism adjusts itself in order to (Bayes) ‘optimally’ predict environmental and bodily regularities. With regard to brain function, predictions are continuously generated and propagated from higher levels of the neural hierarchy to lower ones in an attempt to explain away so-called prediction errors, i.e. the discrepancy between incoming information and generated predictions. On the other hand, prediction errors are propagated from lower levels of the hierarchy to higher ones in order to ‘optimally’ reconfigure the organism. Of note, in the framework of predictive processing, higher (deeper) levels of the neural hierarchy are thought of as corresponding to higher levels of abstraction. In brief, organisms are continuously trying to optimize their expectations, via minimizing overall prediction error, across various scales, through functions such as perception and learning. In doing so, organisms maximize their odds of survival via prediction error minimization. An organism is assumed to achieve this by keeping entropy low, at least temporarily and locally, or, in other words, by keeping the states it can visit as a system bounded (e.g. body temperature around 37°C).
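The bidirectional scheme described above can be illustrated with a deliberately minimal, single-level numerical sketch. This is a toy illustration, not the authors' model: the generative mapping, the precisions and the learning rate are arbitrary choices made for the example.

```python
# Minimal single-level predictive-coding loop: a belief `mu` about a hidden
# cause is revised by gradient descent so that the top-down prediction g(mu)
# explains away the bottom-up prediction error, balanced against the prior.

def g(mu):
    return 2.0 * mu  # toy generative mapping from hidden cause to observation

observation = 4.0
prior_mu, prior_precision = 0.0, 1.0   # higher-level expectation and its weight
sensory_precision = 4.0                # confidence assigned to the observation
mu, lr = prior_mu, 0.05

for _ in range(500):
    sensory_error = observation - g(mu)   # bottom-up prediction error
    prior_error = prior_mu - mu           # discrepancy with the prior belief
    # Precision-weighted error minimization (the factor 2.0 is dg/dmu):
    mu += lr * (sensory_precision * sensory_error * 2.0
                + prior_precision * prior_error)

# mu settles between the prior (0.0) and the value that would fully explain
# the observation (2.0), weighted by the two precisions.
```

The equilibrium belief is a precision-weighted compromise, echoing the idea that expectations at higher levels constrain, and are revised by, prediction errors from lower levels.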
Notably, expectations in this framework cover a wide spectrum of controlled processes, from conscious actions and thoughts to ‘automatically’ adjusted interoceptive states. Yet, humans are not mere passive spectators who just assimilate reality. On the contrary, they actively interact with their world, including other persons, modifying it to meet their expectations through processes of externalization (cf. active inference). For instance, when the perceived partial pressure of carbon dioxide surpasses certain bounds, the respiratory system is in charge of keeping it within expected levels, preserving bodily order and thereby survival. To give another example, when a person sits in the cold for an extended period of time, their body temperature tends to fall below predicted values. Shivering, lighting a fire, or entering a warm facility often reverses this trend and aids in keeping the body temperature within boundaries that are conducive to survival and well-being. Such prediction error minimization processes of actively controlling the body and the environment, with the goal of actively transforming the world and the body such that they conform with prior expectations, have been referred to as active inference. Importantly, in real life, the above-mentioned processes of internalization and externalization should be considered as inextricably linked. To this end, in the next section, we will situate these accounts in the sociocultural realm through a dialectical prism, emphasizing what we describe as ‘becoming-with’ (on a collective level) and its interrelation with ‘being’ (on an individual level). (b) The dialectics of the individual and the collective The above-mentioned hierarchical structures of predictive processing, in our view, should be considered as collectively shaped. First, we dynamically ‘embody’ each other in and through social interaction, enabling interpersonal attunement (e.g. interpersonal belief resonance).
In other words, by engaging in sensory-motor couplings with others in social interactions, we have our bodily structures mutually transformed beyond the here and now. Second, such structures, arguably, unfold within nested time and space scales, from biology and cognition all the way up to society. That is, these multi-scale dynamics encompass bottom-up and top-down processes, even outside the skull of individuals, and thus resolving such a dialectic requires accepting the complementary nature of reduction and emergence. Various theoretical frameworks have been proposed, from autopoïesis and enaction to coordination dynamics and active inference. While autopoïesis is one of the first examples of such multi-scale frameworks, its attempt to connect biology up to the social level has been limited. Indeed, first-order structural coupling allows for the emergence of cells, and second-order structural coupling allows for the emergence of multi-cellular organisms; social interaction between those complex organisms, in turn, can be thought of as a third-order structural coupling. However, the (bio)logical grounding of autopoïesis assumed a strong solipsism regarding knowledge, and thus the largest autopoïetic unit has always remained a single agent. In fact, this constitutes a key point of disagreement between Maturana and Varela, as well as with the later-developed paradigm of ‘enaction’, which, on the one hand, keeps the inspiration from autopoïesis and, on the other hand, aims to explore the link between our individual experience and the interaction—and even co-existence—with others. The coordination dynamics framework had relatively fewer problems bridging the micro–macro divide, since it comes from the language of complex systems and nonlinear dynamics.
Although the tools of synergetics were originally applied to the understanding of finger movement, the derived principles were revealed to apply across multiple scales: between brain regions within a given brain, between limbs within a given individual and even between individuals within social groups. The frameworks of predictive processing and active inference have lent themselves to the formal generation and comparison of hypotheses through their associated generative models, yet have also been criticized for an overly passive and detached approach to making sense of human cognition in certain articulations. Additionally, the broader field of computational psychiatry can be questioned for largely emphasizing certain aspects of (artificial) cognition (e.g. reinforcement learning, decision-making, as well as representations of reward, punishment and risk as optimization functions), at the expense of others (e.g. the social dimension, subjective experience and creativity). Therefore, while active inference appears as a potentially unifying framework, a lot of work remains to be done toward a balanced state space which would account for a realistic perspective of the human mind. Having said that, despite their original limitations, within the relevant scientific landscape, the active inference frameworks can be viewed as a powerful toolbox, which shares some neurobiological grounding with autopoïesis and enaction, while at the same time deploying a physics formalism akin to coordination dynamics. Considering the commonalities (the emphasis on complex systems, multi-scale dynamics, uncertainty, embodiment and self-organization, to name but a few), but also the tensions between the various flavours, we foresee a dynamic convergence between the above-mentioned frameworks, which, in our view, have already been capturing complementary, at times overlapping, projections of the multi-scale dynamics of social interaction and the mind.
As a result of their interaction, on the one hand, enaction has been drawing inspiration from the commitment to formalism , while, on the other hand, active inference has been increasingly situated within multi-scale (social) interactions . In doing so, active inference has been aspiring to become a generalized framework in various fields, ranging from philosophy, psychology and psychiatry to neuroscience, robotics and artificial intelligence (e.g. ; ‘a theory of every “thing” that can be distinguished from other “things” in a statistical sense’ as Friston provocatively puts it ; but also note critiques on the scope of current versions of the framework; ). In light of the above-described considerations, active inference, we argue, should not be thought of as exclusively lying within or being restricted to the individual. For example, social norms, architecture and technology may all be understood as a collective effort to optimize predictability via transforming ourselves and the environment in accordance with bodily and interpersonal expectations . Here, bodily expectations could be an adaptive range of certain attributes of a living human body, such as temperature and pressure, but also psychological states, such as the desire for socializing and reproducing, while interpersonal expectations may include whatever another person or a group of persons could expect from us (cf. social norms). We typically try to satisfy both bodily and interpersonal expectations, even if they, at times, appear contradictory (cf. the various conflicts between desires and cultural conventions). Importantly, both types of expectations should be considered as dynamic states of the world and thus ‘historical’ products of interaction. In other words, expectations are multi-scale processes, fluctuating at various scales, from phylogeny and ontogeny to culture and individual psychophysiology. 
Here, let us examine an illustrative hypothetical scenario on the ontogenic scale, inspired by Vygotsky . A baby, trying to regulate her hunger and thus interoception balance, tries to reach for food. However, she is unable to do so, despite stretching her whole body, even extending the index finger. A caregiver, observing the situation and predicting the baby's goal, brings the piece of food closer to the kid. After multiple repetitions, this statistical regularity of interpersonal coordination can be internalized by the baby, who at some point understands that an extended index finger towards an object denotes the intention of directing the attention of another person to an object in the environment. This is an important realization, as it is in this very process of transforming the interpersonal into an individual mental process, that the baby masters a completely new (mental) tool, through which she can then affect the environment, including others, and critically her own self (cf. self-regulation), in a much more efficient way. Such skills at the interface of the collective and the individual may give rise to core aspects of the human self, such as ‘social agency’, which has been thought of as the ‘sense of self that is gained through the perceived control one exerts over the social world’ . As Vygotsky put it ‘the relation between higher psychological functions was at one time a physical relation between people’ ). Now stretching this example at an intergenerational scale, one could trace back the interpersonal history of various cultural conventions and social norms, which now appear as psychological laws, such as the need to wear clothes even when the weather conditions do not require to do so . Finally, considering possible interactions of these processes at the scale of phylogeny might even allow us to examine the potential social origins and interrelation of certain aspects of human anatomy and human-specific skills. 
The dialectics of internalization and externalization

In this first section, we place our focus on the individual, reviewing human development and becoming as the dynamic interplay between (social) internalization and (collective) externalization in and through social interaction . On the one hand, internalization, here, can be thought of as the co-construction of bodily structures actively reflecting the social world and the organism. On the other hand, externalization is taken as the collective construction of the world, including other (inter-)bodily structures. Along those lines, the collective and the individual levels appear inextricably linked, as an interrelation between the formation of multi-scale hierarchical bodily structures (e.g. psychophysiological states) and interpersonal statistical regularities beyond the individual (e.g. social norms). Below we further unpack the key notions of internalization and externalization, while connecting them to recent computational theories of brain and bodily function. Our notion of internalization is based on the Vygotskian take, thought of as the internal reconstruction of an external operation , further operationalized within the predictive processing framework (; see also ).
Predictive processing has been defined as a hierarchical bidirectional process through which an organism adjusts itself in order to (Bayes-)‘optimally’ predict environmental and bodily regularities. With regard to brain function, predictions are continuously generated and propagated from higher levels of the neural hierarchy to lower ones in an attempt to explain away so-called prediction errors, i.e. the discrepancy between incoming information and generated predictions. Prediction errors, in turn, are propagated from lower levels of the hierarchy to higher ones in order to ‘optimally’ reconfigure the organism. Of note, in the framework of predictive processing, higher (deeper) levels of the neural hierarchy are thought of as corresponding to higher levels of abstraction. In brief, organisms are continuously trying to optimize their expectations, via minimizing overall prediction error, across various scales, through functions such as perception and learning. In doing so, organisms maximize their odds of survival. An organism is assumed to achieve this by keeping entropy low—at least temporarily and locally—or, in other words, by keeping the states it can visit as a system bounded (e.g. body temperature around 37°C). Notably, expectations in this framework cover a wide spectrum of controlled processes, from conscious actions and thoughts to ‘automatically’ adjusted interoceptive states. Yet, humans are not mere passive spectators who just assimilate reality. On the contrary, they actively interact with their world, including other persons, modifying it to meet their expectations through processes of externalization (cf. active inference; ). For instance, when the perceived partial pressure of carbon dioxide surpasses certain bounds, the respiratory system is in charge of keeping it within expected levels, preserving bodily order and thereby survival.
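To make the above mechanics concrete, here is a minimal toy sketch (our own illustrative construction, not a model from the literature discussed here; the precision values and temperatures are arbitrary assumptions) of a single belief being updated by precision-weighted prediction errors from the sensory level below and the prior level above:

```python
def predictive_coding_step(mu, observation, prior_mu, lr=0.1,
                           pi_obs=1.0, pi_prior=1.0):
    """One gradient step reducing squared prediction error for one belief.

    mu       : current estimate of a hidden state (e.g. body temperature)
    pi_obs   : precision (inverse variance) of the bottom-up sensory signal
    pi_prior : precision of the top-down prediction from the level above
    """
    eps_obs = observation - mu    # bottom-up prediction error
    eps_prior = prior_mu - mu     # top-down prediction error
    # The belief moves so as to reduce both errors, each weighted by
    # how reliable (precise) the corresponding signal is taken to be.
    return mu + lr * (pi_obs * eps_obs + pi_prior * eps_prior)

# 'Perception' as iterated error minimization: with a prior expectation of
# 37.0 and sensory evidence of 36.0, the belief settles at the
# precision-weighted compromise (4 * 36.0 + 1 * 37.0) / 5 = 36.2.
mu, prior, obs = 37.0, 37.0, 36.0
for _ in range(200):
    mu = predictive_coding_step(mu, obs, prior, pi_obs=4.0, pi_prior=1.0)
print(round(mu, 2))  # prints 36.2
```

Active inference, in this picture, corresponds to the complementary move: instead of revising `mu`, the organism acts on the world until the observation itself matches the prior expectation (e.g. lighting a fire rather than accepting a low temperature reading).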
To give another example, when a person sits in the cold for an extended period of time, their body temperature tends to fall below predicted values. Trembling, lighting a fire, or entering a warm facility often reverses this trend and aids in keeping the body temperature within boundaries that are conducive to survival and well-being. Such prediction error minimization processes of actively controlling the body and the environment—transforming the world and the body such that they conform with prior expectations—have been referred to as active inference (; see also ). Importantly, in real life, the above-mentioned processes of internalization and externalization should be considered as inextricably linked. To this end, in the next section, we will situate these accounts in the sociocultural realm through a dialectical prism, emphasizing what we describe as ‘becoming-with’ (on a collective level) and its interrelation with ‘being’ (on an individual level).

The dialectics of the individual and the collective

The above-mentioned hierarchical structures of predictive processing, in our view, should be considered as collectively shaped. First, we dynamically ‘embody’ each other in and through social interaction , enabling interpersonal attunement (e.g. interpersonal belief resonance; ). In other words, by engaging in sensory-motor couplings with others in social interactions, we have our bodily structures mutually transformed beyond the here and now . Second, such structures, arguably, unfold within nested time and space scales, from biology and cognition all the way up to society . That is, these multi-scale dynamics encompass bottom-up and top-down processes, even outside the skull of individuals, and thus resolving this dialectic requires accepting the complementary nature of reduction and emergence . Various theoretical frameworks have been proposed, from autopoïesis and enaction to coordination dynamics and active inference .
While autopoïesis is one of the first examples of such multi-scale frameworks, its attempt to connect biology up to the social level has been limited. Indeed, first-order structural coupling allows for the emergence of cells, and second-order structural coupling allows for the emergence of multi-cellular organisms; social interaction between those complex organisms, in turn, can be thought of as a third-order structural coupling . However, the (bio)logical grounding of autopoïesis assumed a strong solipsism regarding knowledge, and thus the largest autopoïetic unit has always remained a single agent . In fact, this constitutes a key point of disagreement between Maturana and Varela, as well as with the later developed paradigm of ‘enaction’, which, on the one hand, keeps the inspiration from autopoïesis and, on the other hand, aims to explore the link between our individual experience and the interaction—and even co-existence—with others . The coordination dynamics framework has had relatively fewer problems in bridging the micro–macro divide, since it comes from the language of complex systems and nonlinear dynamics. Originally developed by applying the tools of synergetics to the understanding of finger movement , its principles were revealed to apply across multiple scales: between brain regions within a given brain, between limbs within a given individual and even between individuals within social groups . The frameworks of predictive processing and active inference have lent themselves to the formal generation and comparison of hypotheses through their associated generative models , yet they have also been criticized for an overly passive and detached approach to making sense of human cognition in certain articulations . Additionally, more broadly, the field of computational psychiatry can be questioned for largely emphasizing certain aspects of (artificial) cognition (e.g.
reinforcement learning, decision-making, as well as representations of reward, punishment and risk as optimization functions), at the expense of others (e.g. the social dimension, subjective experience and creativity). Therefore, while active inference appears as a potentially unifying framework, much work remains to be done toward a balanced state space which would account for a realistic perspective on the human mind. Having said that, despite these original limitations, within the relevant scientific landscape the active inference framework can be viewed as a powerful toolbox, which shares some neurobiological grounding with autopoïesis and enaction, while at the same time deploying a physics formalism akin to coordination dynamics . Considering the commonalities (the emphasis on complex systems, multi-scale dynamics, uncertainty, embodiment and self-organization, to name but a few; ), but also the tensions between the various flavours , we foresee a dynamic convergence between the above-mentioned frameworks, which, in our view, have already been capturing complementary—at times overlapping—projections of the multi-scale dynamics of social interaction and the mind. As a result of their interaction, on the one hand, enaction has been drawing inspiration from the commitment to formalism , while, on the other hand, active inference has been increasingly situated within multi-scale (social) interactions . In doing so, active inference has been aspiring to become a generalized framework in various fields, ranging from philosophy, psychology and psychiatry to neuroscience, robotics and artificial intelligence (e.g. ; ‘a theory of every “thing” that can be distinguished from other “things” in a statistical sense’, as Friston provocatively puts it ; but also note critiques on the scope of current versions of the framework; ).
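The coordination dynamics principles mentioned above are typically formalized with coupled-oscillator models. As a hedged illustration (a generic textbook sketch, not an analysis from the works cited here; the parameter values are arbitrary assumptions), the two-oscillator Kuramoto model shows how phase locking between two rhythmic units—limbs, neural populations or interacting partners—appears once coupling strength outweighs their frequency mismatch:

```python
import math

def final_drift(delta_omega, K, steps=20000, dt=0.001):
    """Integrate the phase difference phi between two coupled oscillators:
    dphi/dt = delta_omega - 2*K*sin(phi).
    Returns |dphi/dt| at the end (near zero when the pair phase-locks)."""
    phi = 0.5
    for _ in range(steps):
        phi += dt * (delta_omega - 2.0 * K * math.sin(phi))
    return abs(delta_omega - 2.0 * K * math.sin(phi))

# Coupling strong relative to the frequency mismatch -> phase locking.
locked = final_drift(delta_omega=1.0, K=1.0)
# Coupling too weak (2K < delta_omega) -> the phases keep drifting apart.
drifting = final_drift(delta_omega=1.0, K=0.2)
print(locked < 1e-3, drifting > 0.1)  # prints True True
```

Phase locking requires |delta_omega| ≤ 2K; the fact that the same minimal equation can be reapplied at different scales is part of the cross-scale appeal of coordination dynamics noted above.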
In light of the above-described considerations, active inference, we argue, should not be thought of as exclusively lying within, or being restricted to, the individual. For example, social norms, architecture and technology may all be understood as a collective effort to optimize predictability via transforming ourselves and the environment in accordance with bodily and interpersonal expectations . Here, bodily expectations could be an adaptive range of certain attributes of a living human body, such as temperature and pressure, but also psychological states, such as the desire for socializing and reproducing, while interpersonal expectations may include whatever another person or a group of persons could expect from us (cf. social norms). We typically try to satisfy both bodily and interpersonal expectations, even if they, at times, appear contradictory (cf. the various conflicts between desires and cultural conventions). Importantly, both types of expectations should be considered as dynamic states of the world and thus ‘historical’ products of interaction. In other words, expectations are multi-scale processes, fluctuating at various scales, from phylogeny and ontogeny to culture and individual psychophysiology. Here, let us examine an illustrative hypothetical scenario on the ontogenic scale, inspired by Vygotsky . A baby, trying to regulate her hunger and thus her interoceptive balance, tries to reach for food. However, she is unable to do so, despite stretching her whole body, even extending the index finger. A caregiver, observing the situation and predicting the baby's goal, brings the piece of food closer to the child. After multiple repetitions, this statistical regularity of interpersonal coordination can be internalized by the baby, who at some point understands that an extended index finger towards an object denotes the intention of directing another person's attention to that object in the environment.
This is an important realization, as it is in this very process of transforming the interpersonal into an individual mental process that the baby masters a completely new (mental) tool, through which she can then affect the environment, including others, and critically her own self (cf. self-regulation), in a much more efficient way. Such skills at the interface of the collective and the individual may give rise to core aspects of the human self, such as ‘social agency’, which has been thought of as the ‘sense of self that is gained through the perceived control one exerts over the social world’ . As Vygotsky put it, ‘the relation between higher psychological functions was at one time a physical relation between people’ . Now, stretching this example to an intergenerational scale, one could trace back the interpersonal history of various cultural conventions and social norms, which now appear as psychological laws, such as the need to wear clothes even when the weather conditions do not require it . Finally, considering possible interactions of these processes at the scale of phylogeny might even allow us to examine the potential social origins and interrelation of certain aspects of human anatomy and human-specific skills. A typical example can be found in the potential multi-scale interplay between human eye morphology (e.g. the ratio of exposed sclera in the eye outline, which allows others to recognize the gaze's direction more easily), the enhanced cognitive abilities for gaze-based interaction, and later abstract cognitive skills . These sorts of multi-scale and inextricably linked processes of human becoming, while attuning to one another and the environment, are what has been described as dialectical attunement . Taken together, humans actively co-construct and co-regulate—in interaction with other organisms—their ecosocial niches, so that they increase the survival chances of not just the individual, but also the social group and the species as a whole.
Interpersonal misattunement in and through social interaction

So far, we have considered the importance of interpersonal attunement in social interactions and the formation of the human self. Subsequently, placing our focus on psychopathology, we extend our discussion from attunement to potential misattunement, discussing the dialectical misattunement hypothesis . According to this hypothesis, psychopathology can be viewed not merely as a (mis-)function within single brains, but also as a dynamic interpersonal mismatch (for a comprehensive review of the phenomenon as well as the relevant psychophysiological processes see ). As we will examine below, the primary aim of such an approach is to move beyond the individual in the study of psychopathology, yet without neglecting the tightly connected psychophysiological processes at play. More concretely, here, misattunement across persons is thought of as a series of disturbances of the dynamic and reciprocal unfolding of an interaction . Such misattunement results in potentially increasingly divergent prediction and interaction styles, and vice versa. Prediction and interaction styles are defined as a set of prior expectations and reaction patterns a person dynamically develops in interaction with the world and others, as discussed above (cf. predictive processing and active inference). The above-mentioned misattunement is not only mediated, but at times reinforced, by a selectively designed cultural and technological environment, which is typically meant to conform with and satisfy dominant social standards. A dynamic interpersonal misattunement is—similarly to attunement—expected to unfold across various scales, from seconds to multiple years. First, consider the case of a conversation between two persons. Someone may voice an unexpected opinion or act in an unforeseen manner. In turn, the second person might react defensively.
This slight initial misattunement can potentially spiral out of control, while additional factors, such as emotional engagement and internalized social norms, might further reinforce such a vicious cycle. An example of such an interacting dyad may consist of an autistic child who, when stressed, might tend to react with repetitive movements, and a neurotypical one who has—via exclusively interacting with neurotypical peers—formed a rather strictly tuned set of expectations (cf. ‘narrow priors’ in Bayesian formulation) of what a typical conversational reaction might be. Furthermore, imagine a human relationship. Short-scale interpersonal disturbances might lead to a cumulative misattunement, which can oftentimes go beyond the conscious will of the interacting parties. In other words, day-to-day misunderstandings—if not resolved in a timely manner—can potentially lead to a cascade of interpersonal obstacles and increasing personal dissatisfaction, up to an eventual dissolution of a relationship . Now consider a child who, throughout her development, repeatedly experiences such interpersonal misattunement. In such a case, a persistent social exclusion might actually exert a greater impact on the development of this person than an initial atypicality in the generation of expectations and reactions. This is likely to prevent her from naturally developing the knowledge and skills a typical person develops in and through the daily social interactions within a given culture. Of note, an interpersonal misattunement, as defined above, lies in the interaction between the two parties and as such it constitutes a collective phenomenon non-reducible to either of them. Here, it should become clear that our view goes beyond an apparently persistent misconception that the responsibility for such a misattunement a priori lies with the individual who might be considered as the atypical one.
With regard to larger groups of persons, this kind of misattunement could even take on a cultural, and as such intergenerational, form. For example, culturally cultivated beliefs in a given society about a specific group of people might highly impact the effectiveness of interaction between in- and out-group persons, eventually provoking intergroup conflict, and vice versa. From a Bayesian perspective, one can imagine social stigma and stereotypes as a strict set of prior beliefs operating on a relatively long timescale. Although certain rigid prior beliefs can be adaptive, potentially serving as useful heuristics for quick and effective decisions, they can turn out to be detrimental to human communication and well-being in the long run by segregating social groups and perpetuating inequalities. Here, it is important to underline that, in our view, this kind of interpersonal misattunement should be treated as a phenomenon at the intersection of the individual and the collective. Over-focusing on either side can be counterproductive when it comes to grasping complex phenomena. For instance, an initial (medical) condition can lead to a cascade of other ‘comorbid’ conditions, such as depression and anxiety, not through an actual biologically causal link, but through the interplay of an actual condition and social expectations in a given sociocultural context. Let us consider an illustrative example: a person who, diagnosed as HIV-positive, develops depression in the years following their diagnosis. In this case, the psychiatric condition of depression might potentially have to be examined more in relation to an interpersonally aversive environment due to social stigma than to the actual medical condition.
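The ‘strict set of prior beliefs’ invoked above can be made concrete with a standard conjugate Gaussian belief update (a generic textbook sketch under our own arbitrary numbers, not data or a model from the cited work): the narrower the prior, the less the belief moves, even after repeated contradicting evidence.

```python
def gaussian_update(prior_mean, prior_var, obs, obs_var):
    """Exact conjugate update of a Gaussian belief given one observation."""
    k = prior_var / (prior_var + obs_var)  # how much the evidence counts
    return prior_mean + k * (obs - prior_mean), (1 - k) * prior_var

def belief_after(prior_var, evidence=1.0, n=10):
    """Start from belief 0.0 and apply n observations of `evidence`."""
    m, v = 0.0, prior_var
    for _ in range(n):
        m, v = gaussian_update(m, v, evidence, obs_var=1.0)
    return m

rigid = belief_after(prior_var=0.01)    # narrow, rigid prior: barely moves
flexible = belief_after(prior_var=1.0)  # wide prior: tracks the evidence
print(round(rigid, 2), round(flexible, 2))  # prints 0.09 0.91
```

On this reading, the long timescale of stigma corresponds to a prior whose variance has been driven down by a history of culturally repeated ‘observations’, making it nearly immune to individual counter-evidence.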
In other words, taking social interactions seriously helps to re-emphasize the importance of a genuinely biopsychosocial model of health and disease and argues for a systems approach to medicine, with a particular emphasis on dyadic social interactions, which form a crucial interface connecting different factors and levels of description. So far, we have examined scenarios of misattunement between persons. However, such interpersonal segregations and social interaction disturbances might even take the form of an environmental misattunement for certain groups of people. Let us, here, contemplate a hypothetical scenario: in a world where human height typically exceeds 3 metres, a person of average height in our world would face severe difficulties in everyday life; even an activity as simple as sitting on a chair in a restaurant would turn into a challenging endeavour. Now contrast this scenario with the everyday life of an autistic person in a neurotypically designed cultural and technological world. Arguably, at least part of their anxiety could be alleviated by reconsidering the design of our living spaces, both real and virtual ones . These kinds of inextricably linked and multi-scale processes of dynamically disturbed interaction between persons—mediated by the (cultural) environment—are what we call dialectical misattunement . Importantly, the dialectical misattunement hypothesis makes concrete suggestions with regard to social interactions and interpersonal relationships in real life, amenable to empirical validation: at certain scales, ‘interactions within homogeneous dyads are expected to appear smoother compared to heterogeneous dyads. Additionally, tuned interactions of either homogeneous or heterogeneous dyads should appear as most effective’ .
To push beyond the ‘healthy’ versus ‘patient’ dichotomy, as a first step, it considers interactions not only within neurotypical dyads, but also within mixed dyads or groups and, crucially, between individuals from a certain social group (e.g. individuals from a social or a so-called neural minority). That is, the dialectical misattunement hypothesis proposes moving beyond merely contrasting—a priori and largely arbitrarily defined—groups of individuals toward systematically studying the multi-scale dynamics of social interaction. A focus on interactions between persons sharing a condition, such as autism, will have a dual benefit. First, tapping into interpersonal mismatch processes might result in a more precise analysis of communication efficacy and potential breakdown mechanisms, beyond an exclusively neurobiological aetiology. Second, by taking atypical social interaction seriously, in terms of both research and practice, voice is given to the most relevant part of the population, namely those with a condition themselves. As Milton put it, ‘autistic people will need to be utilising their voices in, claiming ownership of the means of autistic production, and potentially celebrate the diversity of dispositions within and without the culture’. Nevertheless, the dialectical misattunement hypothesis eventually questions a priori dichotomies altogether, aiming at breaking free from acknowledged weaknesses of prominent nosological disease models. As van Praag (2000) questioned, ‘are the diagnostic constructs we are used to working with valid and clinically relevant or, rather, pseudo-entities; artefacts of a rigidly applied nosological doctrine’? Dialectical misattunement attempts to bypass such pitfalls by examining psychopathology as a continuum of interpersonal mismatch. In the case of autism, a straightforward way to operationalize this is via studying the interpersonal difference of autistic traits, rather than merely individual traits.
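One generic way to operationalize such an ‘interpersonal difference’ of traits (a minimal sketch; the trait dimensions and numbers below are invented purely for illustration and are not drawn from any dataset) is as a distance between individual profiles in a multi-dimensional feature space:

```python
import math

# Hypothetical trait profiles (e.g. z-scored questionnaire subscales).
profiles = {
    "A": [0.2, -0.5, 1.1, 0.3],
    "B": [0.3, -0.4, 1.0, 0.2],
    "C": [-1.5, 1.2, -0.8, 1.9],
}

def interpersonal_distance(p, q):
    """Euclidean distance between two trait profiles: a simple dyadic,
    rather than individual, predictor of (mis)attunement."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

d_ab = interpersonal_distance(profiles["A"], profiles["B"])
d_ac = interpersonal_distance(profiles["A"], profiles["C"])
# Under the hypothesis, the similar dyad (A, B) would be expected to
# attune more easily than the dissimilar dyad (A, C).
print(d_ab < d_ac)  # prints True
```

The same construction extends transdiagnostically: any set of dimensional measures defines the feature space, and the unit of analysis becomes the dyad's distance rather than either individual's score.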
For example, following this approach, it has recently been shown that the more similar two persons are in autistic traits, the higher the reported friendship quality . Ultimately, as we will discuss later in this article, such a research line points toward studying psychiatric conditions transdiagnostically, as interpersonal distance in a multi-dimensional feature space. Having said that, we still maintain that considering exclusively the social dimension of psychopathology leads to an incomplete understanding and that psychophysiological mechanisms should be addressed in parallel. Indeed, the boundaries between the neural, the behavioural and the social are in the eye of the beholder, and the understanding of the physiological mechanisms supporting social dynamics is as important as the understanding of the impact of social dynamics on individuals' psychophysiological autonomy and balance. Here, dialectical misattunement resonates with the neurodiversity paradigm which, while acknowledging the need to address certain aspects on an individual basis, still views psychopathology as a human variation rather than an a priori disorder. At this point, having considered situations of typical and atypical attunement between persons, and also between them and the environment, we now turn to situations where interpersonal attunement fails after having been more or less typical up to a certain point in life. An extended involuntary solitary confinement may constitute such a case. Typically, isolated individuals report multiple perceptual hallucinations—frequently of a social nature . Adopting a dialectical misattunement perspective, we argue that this kind of hallucination might be a way to reduce prediction error due to a discrepancy between strong bodily expectations to (socially) interact and an unexpectedly unfolding reality.
Indeed, ‘Heidegger provides an analysis of human existence in which being-with (Mitsein) or being-with others is part of the very structure of human existence, shaping the way that we are in the world. … In effect, one doesn't come to have a social constitution by way of interacting with others; one is “hard-wired” to be other-oriented, and this is an existential characteristic that makes human existence what it is’ . Even worse, such situations may turn out to be self-reinforcing, as a person who experiences extended isolation can gradually develop a form of ‘social atrophy’, which in turn can contribute to further isolation, and so on. ‘Social atrophy’ is a metaphor describing a situation in which social skills deteriorate because they are not used as much as expected—just as muscle atrophy describes the weakening of our muscles when they remain persistently inactive. Here, we could consider various relevant examples, such as persons who are homeless, institutionalized or otherwise socially excluded. Such phenomena can at times even apply to whole populations, as has been the case with the recent repeated COVID-19 lockdowns aiming at restricting the spread of the pandemic. Taken together, the theoretical investigation of interpersonal misattunement, and how it could potentially be ameliorated and even prevented, could be relevant not only to psychiatry but also to various other fields of research and societal practice, and could potentially concern each and every one of us. The dialectical misattunement hypothesis shares certain commonalities with approaches of different fields, such as computational psychiatry and sociology, from which it departs due to its inherently multi-scale nature.
For example, leaning on predictive processing and active inference, it shares common ground with accounts of computational psychiatry, such as the HIPPEA account (; for High Inflexible Precision of Prediction Errors) and the aberrant precision one , which attempt to redefine autism as a deficit of domain-general information processing, explaining difficulties such as those relevant to theory of mind, executive dysfunction and central coherence under a common umbrella. Notably, while these accounts constitute an important development, touching also upon certain social aspects, they still view the condition from a methodologically individualistic perspective: the deficit or difference lies exclusively in the autistic individual. Within the sociological field, an account relevant to the dialectical misattunement hypothesis is the ‘double empathy problem’ (DEP), which insightfully questions the ontological status of autism as articulated in prominent cognitivist accounts in favour of an interactional and relational one . Although the DEP has been an important idea, helping shift the perspective on autism away from methodologically individualistic approaches, it still remains relatively agnostic about the key psychophysiological processes. In a nutshell, due to various conceptual and methodological, but also societal, constraints, sociological and psychophysiological processes have been largely studied in isolation . In fact, it has been suggested that the study of the single brain can be in principle sufficient to understand (social) cognition . The dialectical misattunement hypothesis aims at dialectically synthesizing the levels of the individual and the collective through a principled approach. Adopting a Vygotskian perspective, it considers the historical and social construction of the atypical self, while adhering to a scientific understanding of not only interpersonal but also interrelated neurobiological mechanisms.
Taken together, the dialectical misattunement hypothesis hopes to serve as a tool for the theoretical, methodological and empirical study of the multi-scale dynamics of psychopathology, pushing beyond both reductionistic and descriptive accounts.

Collective psychophysiology: measuring and analysing interpersonal (mis)attunement from a second-person perspective

In the previous sections, we reviewed interpersonal attunement and misattunement in social interactions from various angles. Yet, however informative conceptual work might be, an effort to truly go beyond the individual will remain incomplete until put to the test empirically, not only in the laboratory, which allows for great experimental controllability, but also where it really matters, in real life. To this end, here we describe the paradigm of collective psychophysiology, which enables the measurement and analysis of the multi-scale processes of social interaction . More concretely, it, first, embraces the dialectic between the individual and the collective by embedding empirical studies within the context of social interactions, while, second, it synthesizes well-established empirical practices, ranging from multi-person observational and phenomenological approaches to multi-modal neurobehavioral recordings, in order to study social phenomena across scales and contexts. Indeed, crucial aspects of interpersonal attunement and misattunement might not always be graspable by so-called spectatorial paradigms, which primarily trigger and monitor either (third-person) inferential or (first-person) phenomenological processes.
By contrast—but also complementarily—to such accounts, second-person accounts emphasize the role of the real-time and reciprocal dynamics of social interactions in making sense of others: ‘These accounts—sometimes contrastively described as the ‘second-person’ approach to other minds—ask whether social cognition from an observer's point of view is really the most pervasive way of knowing other minds and suggest that social cognition may be fundamentally different when we are actively engaged with others in ongoing social interaction, i.e. when we engage in social cognition from an interactor's point of view’ (; see also ). In fact, it has been suggested that social interaction in and of itself might even constitute—rather than merely contextualize or enable—social cognition (cf. participatory sense making; ). In our opinion, what is crucial in this debate is to prevent the empirical paradigm from a priori prioritizing selected aspects of human experience and psychopathology. In a nutshell, collective psychophysiology comprises a recent empirical paradigm that synthesizes and extends experimental and observational approaches, ranging from strictly structured and free-viewing tasks to real-time social interaction and real-life aspects. This allows for the controlled recording of high-resolution datasets from multiple modalities, while retaining adequate degrees of ecological validity. One example is a two-person psychophysiology set-up designed for studying the multi-scale dynamics of dyadic real-time social interactions in high resolution in the laboratory. Recording robust and meaningful datasets is the first crucial step towards capturing the essence of complex phenomena. However, the development of suitable analytical methods is of paramount importance.
Typical summary measures, averaged over time, offer convenient descriptions; yet, when it comes to non-stationary processes and chaotic-like systems, such as real-time social interactions, such methods fail to capture the critical aspects of the multi-scale temporal dynamics . Therefore, here we anticipate experimental and computational approaches grounded in a synthesis of dynamical systems theory, for formally grasping real-time interpersonal dynamics, with computational accounts of cognition, for formally grasping intrapersonal bodily processes (for a preliminary sketch see ). Such an approach could formally show how collective dynamics in social interactions (e.g. quantified by dynamical systems trajectories) are potentially tracked and enacted by the individual (e.g. quantified by active inference states) and vice versa. Put simply, we consider both ‘low-level attunement’, which plays out over relatively short spatio-temporal scales (e.g. interpersonal coupling in real-time social interaction), and ‘high-level attunement’, which is achieved through the interpersonal alignment and deepening of (inter-)bodily structures toward increasing abstraction beyond the ‘here and now’ (e.g. formation of social norms). Having said that, low- and high-level attunement should not be viewed as parts of a dichotomy, but rather within their dynamic interrelation . Indeed, recent empirical work has demonstrated that distributed neural dynamics integrate information from ‘low-level’ sensorimotor mechanisms and ‘high-level’ social cognition to support the realistic social behaviours that play out in real time during interactive scenarios . The study of such interplay between real-time ongoing social dynamics at the sensorimotor level and slower-paced representational aspects of social negotiation requires new experimental paradigms.
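To make the limitation of time-averaged statistics concrete, the following toy sketch is purely illustrative: the signals, the window size and the choice of a sliding-window Pearson correlation are our own assumptions, standing in for the richer dynamical systems measures discussed above. It shows how a time-resolved coupling measure can reveal an interaction structure that a single global correlation would blur:

```python
import numpy as np

def windowed_correlation(x, y, window, step=1):
    """Sliding-window Pearson correlation between two signals: a simple,
    time-resolved coupling measure that a single time-averaged statistic
    would flatten out."""
    corrs = []
    for start in range(0, len(x) - window + 1, step):
        xs, ys = x[start:start + window], y[start:start + window]
        corrs.append(np.corrcoef(xs, ys)[0, 1])
    return np.array(corrs)

# Two synthetic "interacting" signals: coupled in the first half,
# decoupled in the second half (hypothetical data, not an empirical claim).
rng = np.random.default_rng(1)
t = np.arange(1000)
driver = np.sin(t / 20.0)
follower = np.where(t < 500, driver, 0.0) + 0.3 * rng.normal(size=t.size)

coupling = windowed_correlation(driver, follower, window=100, step=50)
# Early windows show strong coupling; late windows show near-zero coupling,
# a transition that the global average correlation obscures.
```

A measure of this kind is only a first step; the dynamical systems and active inference analyses anticipated in the text would additionally model how such coupling is generated and enacted by each interactant.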
Such paradigms range from human–machine interaction to human–human interaction , and for the latter a specific neuroimaging technique has contributed to the burgeoning field of interactive social neuroscience: hyperscanning—the recording of multiple brains simultaneously . In two decades, this new approach has not only demonstrated that social perception and social interaction lead to different neural correlates but has also uncovered new interpersonal signatures: inter-brain synchronization . Few neurocomputational models have tried to explain social interaction dynamics, most of them focusing on sensorimotor interaction . Integrating the cognitive part of social cognition thus presents a great challenge for future studies . In a nutshell, our conceptual analyses focused on the importance of studying interpersonal (mis)attunement in social interactions, describing second-person neuroscience and collective psychophysiology as a most promising methodology to this end. In the next section, we introduce the paradigm of inter-personalized psychiatry , which will aim at the clinical and scientific assessment, monitoring and treatment of not only personal but also interpersonal parameters and states, potentially resulting in a complementary redefinition of current psychiatric conditions.

Towards an inter-personalized psychiatry: the example of the autism space

The approach to interpersonal attunement described here points toward a psychiatry that embraces the individual in all its dimensions, in particular focusing on the interpersonal aspects of mental health, which could be seen as pointing toward the development of an inter-personalized psychiatry. The above-mentioned dialectical misattunement hypothesis has considered psychopathology as a process that unfolds between the level of the individual and the collective, and construes psychopathology at least in part as a social interaction mismatch rather than a brain disorder per se .
Here, we extend this discussion in order to motivate further conceptual, empirical and computational directions which might allow for a principled redefinition of psychiatric spectrum conditions as space conditions, thereby doing justice to their relational, multi-dimensional and multi-scale nature. To this end, we take the autism spectrum as a paradigm example and apply an extended version of what has been described as generative embedding . Generative embedding is a data analysis approach that consists of two steps, namely the generative modelling step, which aims at modelling the mechanisms of phenomena, and the discriminative step, which aims at capturing discriminative information in the modelled data. The generative modelling step serves as a meaningful dimensionality reduction from the measurement space to a latent space (cf. predictive processing and active inference). The discriminative step deploys machine learning for group classification (e.g. autistic and non-autistic) and feature selection (i.e. selection of crucial parameters for distinguishing between groups). Additionally, this step also makes it possible to adopt an unsupervised scheme, i.e. learning directly from unlabelled information, allowing for a data-driven identification of statistically meaningful sub-groups without relying on a priori categorical assumptions. Based on generative embedding, we now delineate a research line consisting of four core steps . In the first step, the units of analysis are explicitly defined, and subsequently collective psychophysiology data are acquired in different (social) interaction contexts, aiming at probing different mechanisms (cf. phenomenological [first-person], interactional [second-person] and inferential [third-person] phenomena). In a second step, by repeatedly applying generative modelling, the raw data are projected onto several low-dimensional parameter spaces, one for each experiment.
In a third step, the separate parameter spaces are concatenated to form a single hyperspace, by virtue of bringing together all calculated parameters and states. In the fourth step, discriminative approaches are applied to identify the crucial independent dimensions of the hyperspace, yielding a formalized definition of the feature space. By repeating this cycle of experiments and data analyses, an (increasingly informed) multi-dimensional feature space (cf. an autism space within a broader inter-condition space) is constructed, being motivated by, and resulting in, increasingly sophisticated units of analysis and experimental designs. This pipeline is thought to perform repeated cycles: from the definition of the units of analysis and experimental measurements to computational modelling, machine learning, data interpretation and back to the redefinition of the units of analysis. In other words, this research line is meant to perform a periodic movement, without returning to the same point. In short, the proposed procedure is expected to delineate a dynamic multi-scale space of conditions that will be populated not only by individual parameters and states, but also by interactive and relational ones , potentially encapsulating fuzzy clusters of (sub-)conditions . Our assumption, here, is that fine-grained, objective measurements of social interactions could help to identify, in a data-driven manner, how belonging to a certain diagnostic category possibly reflects certain social interaction patterns. In doing so, it may also turn out that transdiagnostic markers of social impairments are identified, a fact which might be even more important for treatment than establishing whether someone belongs to a certain diagnostic group.
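A minimal numerical sketch of steps two to four may help fix ideas. Everything in it is hypothetical: the Gaussian "parameter estimates" stand in for parameters fitted by a generative model per experiment, and a nearest-centroid rule stands in for the discriminative step (a real analysis would use proper model fitting, cross-validation and richer classifiers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 2 (stand-in): per-experiment "generative modelling" is approximated by
# drawing low-dimensional parameter vectors for each subject.
def fit_parameters(n_subjects, n_params, shift):
    return rng.normal(loc=shift, scale=1.0, size=(n_subjects, n_params))

group_a = [fit_parameters(30, 3, 0.0) for _ in range(2)]  # two experiments, group A
group_b = [fit_parameters(30, 3, 1.5) for _ in range(2)]  # two experiments, group B

# Step 3: concatenate the separate parameter spaces into a single hyperspace.
X = np.vstack([np.hstack(group_a), np.hstack(group_b)])   # shape (60, 6)
y = np.array([0] * 30 + [1] * 30)

# Step 4 (stand-in): a crude discriminative step -- nearest-centroid
# classification plus feature ranking by between-group mean separation.
# (No held-out data here; a real analysis would cross-validate.)
centroids = np.stack([X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)])
distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
pred = np.argmin(distances, axis=1)
accuracy = (pred == y).mean()
feature_separation = np.abs(centroids[1] - centroids[0])  # per-dimension ranking
```

An unsupervised variant of the fourth step would cluster X without the labels y (e.g. with k-means) to search for data-driven sub-groups, as described in the text.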
In fact, such an approach need not be grounded in current diagnostic criteria (allowing for classification in line with existing categorical knowledge); on the contrary, by following an unsupervised data-driven approach, inherent biases of diagnostic manuals could potentially be ameliorated (by virtue of potentially unveiling novel transdiagnostic avenues). Here, we draw inspiration from, but also extend toward an interpersonal dimension, the vision of Stephan and Mathys : ‘The hope for the future is that the delineation of patient sub-groups characterized by different disease processes, as indexed by mechanistically interpretable models, will allow for principled predictions about individual treatment and, eventually, pave the way towards a new nosology’. Such a multi-scale account focusing on real-time social interactions and real-life relationships will be critical, as it has been suggested that social interactions may entail processes fundamentally different from situations of passive social observation , while psychiatric disorders, thought of as disorders of social interaction, might be more prominent or may fully manifest in real-time social interactions and real-life relationships . Here, it is important to emphasize the dynamicity of such a definition of a multi-dimensional feature space of conditions. Taking into account that people, their interrelationships within society, as well as the concepts of psychiatric conditions themselves are all dynamic processes, such a procedure does not aim at concluding with a fixed definition, but on the contrary at allowing for continuously capturing the essential dynamics of the co-development of biological processes, individual persons, their interactions and relationships, as well as related emerging concepts, such as social norms, in their historical movement and inherent contradiction.
It is interesting to note that the empirical and computational scheme described here could theoretically accommodate all the above-mentioned processes, as all of them, from neurobiology to social norms, can be thought of as dynamical processes, potentially modelled as Bayesian states. Consequently, this kind of analysis will include not only a given patient, but also her social interactions with others, ranging from significant others to the therapists themselves. In a nutshell, a combined approach of dynamical systems analyses, computational modelling and machine learning will help to provide a mechanistic account of interpersonal misattunement, that is, of when, how and why social interactions go wrong, across scales and levels of description, by virtue of bridging the gap between social interaction, behaviour and biology. Our formal approach eventually aims at embracing the individual not only personally but also interpersonally, pointing toward the development of an inter-personalized psychiatry. In fact, we deliberately introduce the new term inter-personalized psychiatry—in contradistinction to personalized psychiatry —to make a case for taking social interactions seriously across all domains of research and practice, from conceptual, empirical and computational analyses to clinical and societal practice. The promising notion of personalized medicine has been built around the premise that ‘an individual's unique physiologic characteristics play a significant role in both disease vulnerability and in response to specific therapies’ . While the idea can be traced back to, at least, Hippocrates, according to whom ‘it is more important to know what sort of person has a disease than to know what sort of disease a person has’ (quoted in ), the term has gained revitalized attention recently due to the development of sophisticated methodologies for the acquisition and analysis of big (biological) data, e.g. genomics, proteomics and metabolomics.
Here, our approach of inter-personalized medicine goes directly beyond the individual by formally embracing the unique interactional and relational characteristics of dyads and groups of persons, while pushing toward the formal development of ‘sociomics’. We view sociomics as the discipline that will address the current blind spot of the systematic acquisition and analysis of real-time social interaction and real-world social relationship data. This, we foresee, will lead to a revolution in established methodologies targeting the individual (cf. biomarkers, biofeedback, biometrics, self-report questionnaires and brain stimulation) by widening the spotlight to include the dyad and the social group (cf. sociomarkers, cross-brain neurofeedback or sociofeedback, sociometrics, dyad-report questionnaires and multi-brain stimulation ), akin to the development of hyperscanning as an extension of single-brain imaging . Taken together, developing the paradigm of inter-personalized psychiatry will help bring the measurement of social interaction to the foreground, allowing for the quantitative assessment of psychiatric conditions both within and across interacting individuals. This would offer an additional level of description that has a long-standing history in psychiatry (the assessment of so-called psychomotor symptoms), but which can benefit from novel technologies and the integration of interactional and relational data . In other words, this approach could potentially offer a suite of socio- and biomarkers for psychiatric conditions and could help to stratify patients and interaction profiles. Critically, we should note that, in addition to certain diagnostic criteria being met that indicate the presence of a disease entity, psychiatric practice also assesses the subjective suffering reported by individuals (or those in close contact with them).
Importantly, it is also assessed whether social participation is reduced as a consequence of the occurrence of a psychiatric condition. Consequently, here we are not suggesting eliminating these established ways of diagnosing and treating in psychiatry. What we are suggesting is that finding ways to quantitatively assess social interactions, without neglecting qualitative dimensions, could help to elucidate the interrelating social, behavioural, psychological and neurobiological mechanisms. Such a development, we argue, will be an important contribution by virtue of generating inter-personalized prediction models for not only advancing our scientific understanding, but also assisting clinical practice and supporting real-life processes, when needed. Of note, while our inter-personalized approach to psychiatry is in agreement with the current dominant view that mental disorders entail neurobiological, psychological, as well as social dimensions, it is in sharp contrast with its bottom-up hierarchical assumption about the interrelation of those dimensions. As Priebe and colleagues insightfully state : ‘[the prevailing paradigm] regards neurobiological aspects as the basis of disorders, which are then expressed in psychological symptoms influenced and managed within a social context. Neurobiological findings tend to be taken as explanations for disorders. Neurobiological processes have been proposed as explanations for how and why interventions work, including psychotherapy.’ Here, our aim is to turn orthodox medicine on its head, by suggesting that what we need to do is to systematically look at the interpersonal fit of persons and how they establish fulfilling social connections, without neglecting the relevant neurobiological processes, or in other words to naturalize the development of the human mind and psychopathology, without neglecting their cardinal social origins. 
In a nutshell, by bringing together and formally integrating the studies of the individual and the social, inter-personalized psychiatry aspires to go beyond both of them.

Interpersonal (mis-)attunement in society

Taken together, our approach to interpersonal attunement in and through social interaction aims at providing the conceptual and methodological tools to delineate the multi-scale dynamics of the dialectic between social interaction and the mind. For instance, developmental studies of interpersonal attunement, quantified by collective psychophysiology, interpersonal predictive processing and active inference, as well as dynamical systems approaches as discussed above, might provide novel insights about how interpersonal dialogue is progressively transformed into internal speech. This sort of development could eventually lead to a mechanistic account of abstract concept formation across development, potentially unveiling the social and embodied origins of the human self, while also facilitating the development of artificial intelligence (cf. symbol emergence; ). Perhaps most importantly, the implications of an ecologically valid, real-world approach to social interaction reach further than any particular field of research. As Vygotsky stated, ‘we cannot master the truth about personality and personality itself as long as mankind has not mastered the truth about society and society itself’ [ , p. 342]. Taking the collective dimension of human becoming in its interrelation with the individual seriously, as a dialectic between inter- and intrapersonal attunement (without neglecting the constructive tensions, conflicts and struggles of misattunement; ), points toward concrete directions for societal practice. For instance, with regard to pedagogy, our approach speaks to an interactive, collaborative and participant-oriented learning framework as opposed to a commonly deployed hierarchical and competitive one .
It is also in line with a legal system which takes into account not only individual but also collective responsibility, while rejecting certain rehabilitation practices, such as solitary confinement, as literally dehumanizing. With regard to clinical practice, we suggest an inter-personalized psychiatry, which will systematically monitor, evaluate and modulate not only intrapersonal (e.g. psychophysiological and phenomenological), but also interpersonal processes across various contexts. This could include social interactions and interpersonal relationships with the therapist, significant others, within the family, school or work, but also the broad link to society, as this can help align expectations and as such may improve understanding of others. Here, we wish to emphasize the relation between psychotherapist and patient, whose interpersonal (mis)match may require closer evaluation, as not every psychotherapist might be effective for every patient . Along these lines, psychotherapy in a dyadic setting could be investigated by means of interaction-based phenotyping and psychophysiology , which could help to identify communication problems between patient and therapist in order to analyse them in terms of underlying mechanisms, so that the patient (but also the therapist) can learn from them. Furthermore, in psychotherapy in a group setting, a formal communication model could be used to explain how persons (of different conditions) perceive the world and others. This could help to explain when and how communication breaks down and what can be done to prevent this. Additionally, our approach points from current individualistically oriented treatment options, such as biofeedback, toward interpersonal ones, such as sociofeedback—from learning to regulate intrapersonal functioning to learning to regulate interpersonal functioning and the use of social niches to stabilize mental health .
Before concluding, we would like to underline the interrelation between psychological and socioeconomic processes in social interactions, such as the generation and perpetuation of social stigma and inequality. Social stigma may function not only as a cause of inequality, but also as a result thereof. Therefore, a pragmatic approach to mental health should aim at balancing structural asymmetries within actual society. This could include, but is not limited to, reducing social exclusion, as well as facilitating housing, employment and relationship seeking. Taken together, a mutually informed understanding of dysfunction in social interactions—across scales and contexts—and its biological correlates may be exactly the key to finding new, pragmatically efficient research and treatment strategies, but it also reminds us of the necessity to collectively work for the societal change needed to promote well-being and mental health for all.
Considerations into pharmacogenomics of COVID-19 pharmacotherapy: Hope, hype and reality

Introduction

The COVID-19 pandemic is continuing to wreak havoc as new virus variants continue to emerge. To the extent that the COVID-19 vaccines are not broadly and equitably accessible around the world, the pandemic will, unfortunately, likely continue. Meanwhile, efforts are also underway to develop COVID-19 drugs in addition to the vaccines, raising the possibility of the pandemic evolving into an endemic, recurring infection in the future. COVID-19 medicines are of broad interest for the prevention and treatment of COVID-19. Pharmacogenomics is a specialty that examines genome-by-drug interactions and has roots in the early 20th century in the field of biochemical genetics. “Drugs don't work in everyone” is the maxim that scholars in the field of pharmacogenetics and personalized medicine know all too well. As COVID-19 drugs begin to emerge in clinical practice, it is time to recall this principle and deploy the science of pharmacogenomics. Understanding the mechanisms of person-to-person and between-population variation in drug safety and efficacy is fundamental to rational drug development. Pharmacogenomics is an integral part of rational and evidence-based medical practice. Pharmacogenetics and personalized medicine are not in conflict with public health measures against COVID-19, because they make the expeditious development of new medicines possible by helping forecast their pharmacokinetic and pharmacodynamic properties early in the discovery and clinical trial phases. The aim of the present expert review is to highlight and examine the prospects for pharmacogenomics and personalized medicine for the emerging COVID-19 drugs and some of the drug interventions deployed to date.
We sort out the hope, hype, and reality, and suggest that there are veritable prospects to advance COVID-19 medicines for public health benefits, provided that pharmacogenomics is considered and implemented adequately. For this narrative review, the search strategy and information sources were explored in the databases PubMed, ISI, Scopus, Embase and Web of Science, and in search engines including Google Scholar, between the years 2000 and 2022, using medical subject headings (MeSH) terms and combinations of the following keywords: “COVID-19”, “coronavirus disease 2019”, “SARS-CoV-2”, “severe acute respiratory syndrome coronavirus 2”, “pharmacogenomics”, “pharmacogenetics”, “pharmacogenetic testing”, “drug-related genetics”, “antivirals”, “COVID-19 treatment”. The abstracted data were screened and extracted from the included studies by two researchers, duplicates were removed, and the selection criteria were strictly adhered to. The accuracy and quality of the included data were checked by a third researcher. Inclusion was based on screening the titles and abstracts of eligible English-language studies identified through the keywords in the above sources and databases. Potentially relevant articles were retrieved for an evaluation of the full text. Studies and articles meeting the following criteria were excluded: (1) publications in a language other than English; (2) articles not discussing SARS-CoV-2 or COVID-19, COVID-19 treatment, drug-related genetics, or pharmacogenomics related to COVID-19 treatment.

Antiviral agents

2.1 Remdesivir

Remdesivir is a monophosphoramidate nucleoside analogue primarily developed for the treatment of RNA viruses with pandemic potential, such as the Ebola virus and members of the Coronaviridae family like SARS, MERS, and human coronaviruses (Eastman et al., 2020).
Being an RNA-dependent RNA polymerase (RdRp) inhibitor, remdesivir inhibits the replication of multiple coronaviruses in respiratory epithelial cells . The proposed mechanism of RNA-dependent RNA polymerase (RdRp) inhibitors during COVID-19 is illustrated by . Remdesivir was approved by the US Food and Drug Administration (FDA) in 2020 for the treatment of COVID-19 in adult and pediatric patients requiring hospitalization. Furthermore, the National Institutes of Health (NIH) recommended remdesivir for hospitalized patients who require supplemental oxygen. Although the WHO reported that the clinical trial data show no significant decrease in mortality, the European Medicines Agency (EMA) and the FDA issued regular approval . The plasma half-life is 20 min. Remdesivir is a prodrug metabolized to the pharmacologically active nucleoside triphosphate by carboxylesterase 1 (CES1), cathepsin A, and CYP3A4 (Deb et al., 2021). In addition to CYP3A4, CYP2C8 and CYP2D6 are also responsible for the metabolism of remdesivir. In vivo, remdesivir is predominantly metabolized by hydrolases. Moreover, it is also a substrate for the organic anion-transporting polypeptide 1B1 (OATP1B1) transporter and the P-glycoprotein (P-gp) transporter . OATP1B1 is encoded by the solute carrier organic anion transporter family member 1B1 (SLCO1B1) gene, which has several variants that can impact drug disposition. For example, the rs2306283 c.388A > G variant, which is associated with decreased transporter function, has been identified in Africans, Asians, and Caucasians. Other variants (SLCO1B1 rs56101265, rs56061388, rs72559745, rs4149056, rs72559746, rs55901008, rs59502379, and rs56199088) exhibit low frequencies and are associated with decreased transporter function. P-gp, an efflux pump encoded by the ABCB1 gene, plays a role in viral resistance and in trafficking cytokines and enveloped viruses.
Despite the identification of several ABCB1 variants, only rs1128503 c.1236C > T, rs2032582 c.2677G > T/A, and rs1045642 c.3435C > T are relevant in pharmacogenetic studies . CYP3A4 is the most abundant hepatic enzyme system expressed in most populations, with more than 34 allelic variants described. During the inflammatory response of COVID-19, CYP3A4 undergoes cytokine-mediated down-regulation via the JAK/STAT pathway, specifically through interleukin-6 (IL-6) . In severe COVID-19 patients, the use of steroid therapy with remdesivir could affect CYP3A4 transcription, which may result in a lower therapeutic drug level of remdesivir and hence higher IL-6 . CYP2C8 displays low genetic variation, while CYP2D6 is characterized by extensive genetic variability impacting enzyme activity. Duplication or multiplication of active alleles is associated with increased enzyme activity . There is significant interethnic variability in the CYP2D6 variants, which might cause differences in response to CYP2D6 substrates in some populations; variant frequencies are higher among Caucasian and East Asian populations, while duplication/multiplication of active alleles is observed in Middle Eastern and Black African populations . The combination of CYP2D6 alleles can also anticipate the metabolic phenotype of the patient. For example, individuals carrying two null alleles are considered poor metabolizers; those with one functional allele and one null allele are considered intermediate metabolizers; those with two functional alleles are considered extensive metabolizers; while those with duplicated or multiplied functional alleles are considered ultra-rapid metabolizers . Generally, remdesivir has a low risk of significant genetic pharmacokinetic (PK) interactions, although, theoretically, polymorphisms and variants of these genes could affect the PK of remdesivir.
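The allele-count scheme for CYP2D6 metabolizer phenotypes described above can be written down directly. The sketch below is a deliberate simplification for illustration only (clinical phenotyping resolves star alleles and uses curated activity scores, e.g. per CPIC guidelines); the function name and its inputs are our own:

```python
# Illustrative mapping from CYP2D6 allele function to metabolizer phenotype,
# following the simplified scheme in the text (not a clinical algorithm).
def cyp2d6_phenotype(functional_alleles: int, duplicated: bool = False) -> str:
    """functional_alleles: number of functional alleles in the diplotype (0-2);
    duplicated: True if functional alleles are duplicated/multiplied."""
    if duplicated and functional_alleles > 0:
        return "ultra-rapid metabolizer"
    if functional_alleles == 2:
        return "extensive metabolizer"
    if functional_alleles == 1:
        return "intermediate metabolizer"  # one functional + one null allele
    return "poor metabolizer"              # two null alleles

print(cyp2d6_phenotype(0))                   # poor metabolizer
print(cyp2d6_phenotype(1))                   # intermediate metabolizer
print(cyp2d6_phenotype(2))                   # extensive metabolizer
print(cyp2d6_phenotype(2, duplicated=True))  # ultra-rapid metabolizer
```

In practice, the diplotype (i.e. which star alleles are present and whether they are functional) must first be resolved from genotyping data before any such mapping can be applied.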
Accordingly, there is currently no evidence to recommend pharmacogenetic testing before the administration of remdesivir.

2.2 Lopinavir

Lopinavir is an antiretroviral protease inhibitor and is administered exclusively in combination with ritonavir. Coadministration with low-dose ritonavir significantly improves the pharmacokinetic properties, and hence the activity, of lopinavir against HIV-1 protease. This is mainly because ritonavir is a potent inhibitor of the CYP3A4 enzymes responsible for the extensive metabolism of lopinavir. In addition to reducing the biotransformation of lopinavir, this combination also improves the absorption and oral bioavailability of lopinavir by inhibiting OATP1B1 and OATP1B3, along with P-gp, in the gut wall . Lopinavir is metabolized primarily by CYP3A4 and is transported by ABCB1 and ABCB2 . On the other hand, ritonavir is metabolized by CYP2J2, CYP3A4, CYP3A5 , and CYP2D6 . Therefore, the simultaneous administration of lopinavir and ritonavir with other drugs that are highly potent CYP3A inducers should be avoided . The antiviral activity of lopinavir is produced via inhibition of the enzyme 3-chymotrypsin-like protease (3CLpro), which has an important role in viral RNA processing and release from the host cell . In addition, lopinavir blocks a post-entry step in the replication cycle of SARS‐CoV‐2, making lopinavir a promising potential drug for COVID-19 treatment . However, the WHO recommends against its use because of the lack of sufficient evidence and the potential for serious side effects from this combination, such as vomiting, diarrhoea, hypertriglyceridemia, and lipodystrophy . The bioavailability of this antiviral combination can be increased substantially with concurrent ingestion of fatty food. Both agents undergo extensive and rapid first-pass metabolism by hepatic cytochrome P450 (the 3A4 isoenzyme).
With lopinavir/ritonavir 400/100 mg twice-daily administration, the elimination half-life and average oral clearance of lopinavir are approximately 4–6 h and 6–7 L/h, respectively. Less than 3% and 20% of the lopinavir dose are excreted unchanged in the urine and faeces, respectively. This antiviral combination has the potential to interact with a wide variety of drugs and herbal products via several mechanisms, mostly involving the CYP enzymes, and is contraindicated with certain drugs, such as flecainide, propafenone, astemizole, terfenadine, ergot derivatives, cisapride, pimozide, midazolam, triazolam, and St. John's wort .
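As a rough, illustrative consistency check on the pharmacokinetic figures above (the volume of distribution below is our back-of-the-envelope estimate, not a value reported in the source), a one-compartment model relates clearance and half-life to an apparent volume of distribution:

```python
import math

t_half = 5.0  # h, midpoint of the reported 4-6 h half-life range
cl = 6.5      # L/h, midpoint of the reported 6-7 L/h clearance range

# One-compartment relationship: CL = k * V with k = ln(2) / t_half,
# hence V = CL * t_half / ln(2).
k = math.log(2) / t_half
v = cl / k
print(f"k = {k:.3f} 1/h, apparent V = {v:.0f} L")
```

With these midpoint assumptions the implied apparent volume of distribution is on the order of tens of litres; the point is only that the reported half-life and clearance are mutually consistent for a drug of this kind.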
2.3 Favipiravir Favipiravir is a prodrug purine nucleic acid analogue and potent RdRp inhibitor , that has been licensed as an antiviral medication used to treat influenza since 2014 . Favipiravir selectively inhibits the viral RNA dependent RNA polymerase or causes lethal mutagenesis upon incorporation into the viral RNA. Favipiravir inhibits SARS‐CoV‐2 replication in Vero E6 cells. Emergency approval of favipiravir in adult patients with COVID-19 was announced by the National Medical Product Administration (NMDA) in China (Du and Chen, 2020), and is still in use in various countries as a potential treatment for COVID-19 due to its efficacy against different viral infections. However, this antiviral agent is neither approved by the FDA nor recommended by the WHO . Favipiravir is available in oral form with excellent bioavailability. It is metabolized in the liver by aldehyde oxidase and partially by xanthine oxidase. The efficacy of substrates of aldehyde oxidase, such as azathioprine or allopurinol, is associated with variants of aldehyde oxidase. Hence, allelic variants of aldehyde oxidase and xanthine oxidase genes should be considered in therapy. Overall safety profile is good with some concerns of gastrointestinal side effects and hyperuricemia . 2.4 Molnupiravir Molnupiravir is the 5′-isobutyrate prodrug of the antiviral ribonucleoside analogue β-D-N4-hydroxycytidine (NHC). Molnupiravir was the first oral antiviral drug approved by the United Kingdom Medicines and Healthcare Products Regulatory Agency and by the FDA for the emergency treatment of COVID-19 in adults. However, molnupiravir's safety profile is still under invisatigation and clinical trilas to detect clinically important side effects regardless its safe use in patients with hepatic and renal impairment. Moreover, benefit of treatment has not been observed when treatment started after COVID-19 hospitalization. 
Therefore, it is neither indicated for use in patients younger than 18 years of age, because of its effect on bone and cartilage growth, nor for the pre- or post-exposure prevention of COVID-19 . Molnupiravir is an inhibitor of the RNA-dependent RNA polymerase (RdRp) that plays an important role in the replication of SARS-CoV-2 . Circulating NHC is taken up by cells and phosphorylated through endogenous pyrimidine nucleoside pathways to the active ribonucleoside triphosphate (NHC-TP), which is incorporated into viral RNA by the viral RNA polymerase in place of cytidine or uridine triphosphate and can pair with either guanosine or adenosine. This in turn results in the accumulation of many mutations in the viral genome, leading to suppression of viral replication in the tissues. Molnupiravir is hydrolyzed to NHC by the esterases CES1 and CES2 . The conversion of molnupiravir to NHC can be impaired by genetic variation in the genes encoding CES1 and CES2 . Molnupiravir is a weak substrate of the human nucleoside transporter CNT1, while NHC is a substrate of the human nucleoside transporters CNT1, CNT2, CNT3, and ENT2. Some patients show no pharmacological response to molnupiravir because of genetic variation in the genes encoding CNT1, CNT2, CNT3, and ENT2 . Being a prodrug, the 5′-isobutyrate ester of molnupiravir is cleaved by esterases present in the intestine and liver during absorption and hepatic first pass, delivering the ribonucleoside metabolite NHC into the systemic circulation; consequently, only very low levels of molnupiravir are detected in the plasma. Distribution of molnupiravir, NHC, and NHC-TP has been quantified in lung, spleen, kidney, liver, heart, and brain. No data are reported for distribution to other tissues, such as bone and cartilage, the GI tract, or reproductive tissues. Since molnupiravir is not stable in plasma, its plasma protein binding was not assessed.
Regarding metabolism and pharmacogenomics, sufficient reliable data from the phase studies have not yet been published, and more information is needed before conclusions can be drawn . 2.5 Oseltamivir Oseltamivir is an inactive prodrug indicated for the treatment of influenza A and B infections. It is absorbed via peptide transporter 1 (PepT1) and converted to the active metabolite oseltamivir carboxylate by the hepatic enzyme carboxylesterase 1 (CES1). This active metabolite inhibits viral neuraminidase (NEU2), thereby blocking progeny viral release from infected cells and viral entry into uninfected cells. However, the drug can be eliminated before being activated, as it is a substrate of P-gp. The effectiveness of oseltamivir in the treatment of COVID-19 is still under evaluation in several clinical trials . A liquid formulation of oseltamivir (2 mg/kg twice daily for 5 days) is effective in the treatment of children with influenza and may be used in high-risk populations, such as the elderly or those with chronic cardiac or respiratory disease. Furthermore, short-term administration of oral oseltamivir at a dose of 75 mg once or twice daily for 6 weeks prevented more than 70% of cases of naturally acquired influenza in unvaccinated healthy adults, and the drug is effective for post-exposure prophylaxis when started within 48 h of symptom onset in the index case. It is also effective when used adjunctively in previously vaccinated high-risk elderly patients, and is generally well tolerated . After oral administration, oseltamivir is rapidly absorbed from the gastrointestinal tract, and its absorption is not significantly affected by food. It has a high oral bioavailability, reaching up to 79%, and plasma concentrations are detectable within 30 min of an oral dose. It is then extensively metabolized, mainly by hepatic esterases, to its only active metabolite, oseltamivir carboxylate .
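The weight-based pediatric regimen mentioned above (2 mg/kg twice daily for 5 days) can be expressed as a simple calculation (an illustrative sketch only, not dosing guidance; the function name is ours):

```python
def pediatric_oseltamivir_course(weight_kg, dose_mg_per_kg=2.0,
                                 doses_per_day=2, days=5):
    """Per-dose and whole-course totals for the weight-based liquid
    regimen described in the text (2 mg/kg twice daily for 5 days)."""
    per_dose = dose_mg_per_kg * weight_kg
    total = per_dose * doses_per_day * days
    return {"per_dose_mg": per_dose, "total_course_mg": total}

# Example for a hypothetical 20 kg child:
print(pediatric_oseltamivir_course(20.0))
```

For a 20 kg child this works out to 40 mg per dose and 400 mg over the full course.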
This metabolite is rapidly distributed to the primary site of influenza virus replication (the surface epithelial cells of the respiratory tract) after oral oseltamivir administration. Oseltamivir carboxylate is renally eliminated by a first-order process, primarily by glomerular filtration and renal tubular secretion, and has a terminal elimination half-life of 6–10 h. Its clearance is reduced in patients with severe renal dysfunction; furthermore, clearance is slower in the elderly (≥65 years) and faster in children (≤12 years) than in adults. No clinically significant drug interactions have been detected with oseltamivir . Clinical and pharmacogenetic studies have reported significant inter-individual variability in the pharmacokinetics of oseltamivir and in the occurrence of adverse drug reactions (ADRs), related to CES1 genetic variants . The frequency of the G143E variant has been reported as 3.7% in White, 4.3% in Black, and 2.0% in Hispanic populations . Variation in the plasma concentration–time curve of oseltamivir has also been found to be associated with rs71647871 p.Gly143Glu (Lim et al., 2009). The rs200707504 c.662A > G variant in CES1 is associated with decreased bioactivation of the drug . Oseltamivir ADRs have been associated with variants in ABCB1, CES1, NEU2, and SLC15A1, the gene encoding the transporter PepT1. In this regard, the T allele was predominantly related to the occurrence of ADRs, in contrast to the C allele, which was not . Both CES1 and ABCB1 genetic variants are considered valid biomarkers for the prediction and optimization of oseltamivir pharmacotherapy . 2.6 Atazanavir Atazanavir is a potent protease inhibitor (PI) approved as a component of once-daily antiretroviral therapy (ART) regimens for the treatment of patients with HIV-1 infection. It inhibits SARS‐CoV‐2 replication in both Vero cells and human pulmonary epithelial cells.
Atazanavir is rapidly absorbed, with 60–80% oral bioavailability, reaching peak plasma concentrations (Cmax) after 2–3 h, and is metabolized by CYP3A. Atazanavir is ≥86% protein bound (86% to albumin and 89% to α1-acid glycoprotein) . It is extensively metabolized in the liver by CYP3A4 to oxygenated metabolites. After a single 400 mg dose, atazanavir is eliminated mainly via the biliary route (79%), with only minor elimination via the kidneys (13%); unchanged drug accounts for 20% and 7% of these quantities, respectively . Atazanavir is an inhibitor of CYP3A and UDP-glucuronosyltransferase 1A (UGT1A). UGT1A1 carries several polymorphisms, including a variable dinucleotide (TA) repeat within the gene promoter region . A high risk of hyperbilirubinemia has been associated with homozygotes for this repeat, and an intermediate risk with heterozygotes . Therefore, pharmacogenomic counselling before initiating atazanavir therapy is recommended to avoid the development of hyperbilirubinemia . The disposition of atazanavir is also partly mediated by the P-gp efflux pump encoded by the multidrug resistance 1 (MDR1) gene: plasma concentrations of atazanavir are increased in carriers of the 3435 C/C homozygous genotype, predisposing patients to hyperbilirubinemia and severe jaundice. On the other hand, the risk of early-onset lipodystrophy is related to the 238 G > A polymorphism, while the risk of dyslipidaemia is associated with polymorphisms in APOA5 (1131 T > C and 64 G > C), APOC3 (482 C > T, 455 C > T, 3238 C > G), ABCA1 (2962 A > G), and APOE (ε2 and ε3 haplotypes) . 2.7 Nirmatrelvir Nirmatrelvir is an oral antiviral medication that inhibits the SARS-CoV-2 main protease (Mpro). Mpro is the focus of extensive structure-based drug design efforts, mostly covalent inhibitors targeting the catalytic cysteine, thereby impairing the virus's ability to reproduce itself .
This cysteine is responsible for the activity of the 3CLpro of SARS-CoV-2 and potentially of other members of the coronavirus family. 3CLpro, also known as the main protease or non-structural protein 5, is responsible for cleaving polyproteins 1a and 1ab. These polyproteins contain the 3CLpro itself, a papain-like (PL) cysteine protease, and 14 other non-structural proteins. Without the activity of the 3CLpro, the non-structural proteins (including the proteases) cannot be released to perform their functions, inhibiting viral replication. Nirmatrelvir is co-administered orally with a low dose of ritonavir (PAXLOVID™) for the treatment of COVID-19. It reduces the risk of hospitalization or death by 89% compared with placebo in non-hospitalized high-risk adults with COVID-19 . PAXLOVID treatment should be initiated as soon as possible after diagnosis of COVID-19 and within 5 days of symptom onset. The drug is administered orally, with or without food, as 300 mg nirmatrelvir (two 150 mg tablets) with 100 mg ritonavir (one 100 mg tablet), with all three tablets taken together twice daily for 5 days . In vitro studies suggest that CYP3A4 plays a significant role in the metabolism of nirmatrelvir, which provides an opportunity to improve efficacy by co-dosing with a potent CYP3A4 inhibitor such as ritonavir . However, the use of ritonavir poses a significant risk of drug interactions owing to its potent inhibition profile; patients and clinicians should consult the prescribing information for nirmatrelvir and ritonavir to evaluate any potential drug interactions with existing medications before initiating nirmatrelvir.
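The regimen described above works out to the following per-course totals (simple bookkeeping, for illustration only):

```python
# Per-course tablet counts for the regimen described in the text:
# two 150 mg nirmatrelvir tablets plus one 100 mg ritonavir tablet,
# all taken together twice daily for 5 days.
nirmatrelvir_tabs_per_dose = 2
ritonavir_tabs_per_dose = 1
doses = 2 * 5  # twice daily for 5 days

print("nirmatrelvir tablets:", nirmatrelvir_tabs_per_dose * doses)
print("ritonavir tablets:", ritonavir_tabs_per_dose * doses)
print("total nirmatrelvir:", 150 * nirmatrelvir_tabs_per_dose * doses, "mg")
```

A standard course therefore comprises 30 tablets in total: 20 of nirmatrelvir (3000 mg) and 10 of ritonavir.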
Ritonavir inhibits not only CYP isoenzyme family members like CYP3A4, CYP2D6, CYP2C19, CYP2C8, and CYP2C9 , but also ABCB5 P-glycoprotein and cellular transport mechanisms via the efflux pump, breast cancer resistance protein ABCG2, organic anion transporting polypeptides (hOCT1) in the liver, and multidrug and toxin extrusion protein in renal drug handling (MATE1). On the other hand, it induces CYP1A2, CYP2B6, CYP2C9, CYP2C19 , and the UGT family. Because of the potential for the deadly adverse reactions upon its inhibitions and inductions, Nirmatrelvir is contraindicated with drugs that are highly dependent on CYP3A for clearance. The concomitant use of ritonavir with statins, steroids, sedative hypnotics, anticoagulants, and antiarrhythmic therapies is contraindicated . Dosage adjustment is needed in patients with moderate renal impairment, while the drug is not recommended in patients with severe renal impairment. No dosage adjustment is needed in patients with mild or moderate) hepatic impairment. No pharmacokinetic or safety data are available regarding the use of nirmatrelvir or ritonavir in subjects with severe hepatic impairment; therefore, nirmatrelvir is not recommended for use in patients with severe hepatic impairment . Remdesivir Remdesivir is a monophosphoramidate nucleoside analogue primarily developed for the treatment of RNA-viruses that have pandemic potential, such as the Ebola virus and members of the Coronaviridae family like SARS, MERS, and human coronaviruses (Eastman et al., 2020). Being an RNA-dependent RNA polymerase (RdRp) inhibitor, remdesivir inhibits the replication of multiple coronaviruses in respiratory epithelial cells . The proposed mechanism of RNA-dependent RNA polymerase (RdRp) inhibitors during COVID-19 is illustrated by . Remdesivir is approved by the US Food and Drug Administration (FDA) in 2020 for the treatment of COVID-19 in adult and pediatric patients requiring hospitalization. 
Furthermore, the National Institutes of Health (NIH) recommended remdesivir for hospitalized patients who require supplemental oxygen. Although the WHO reported that the clinical trial data shows no significant decrease in mortality, the European Medical Agency (EMA) and the FDA issued regular approval . The plasma half-life is 20 min. Remdesivir is a prodrug metabolized to the pharmacologically active nucleoside triphosphate by carboxylesterase 1 (CES1), cathepsin A, and CYP3A4 (Deb et al., 2021). In addition to CYP3A4, CYP2C8 and CYP2D6 are also responsible for the metabolism of remdesivir. In vivo, remdesivir is predominantly metabolized by hydrolase. Moreover, it is also a substrate for the organic anion-transporting polypeptide 1B1 (OATP1B1) transporter and the P-glycoprotein (P-gp) transporter . The OATP1B1 is encoded by the solute carrier organic anion transporter family member 1B1 (SLCO1B1) gene with several variants that can impact drug disposition. For example, Africans, Asians, and Caucasians have been identified with the rs2306283 c.388A > G which are associated with a decreased transporter function. Other variants exhibit a low frequency and associate with a decreased function of the transporters SLCO1B1 rs56101265, rs56061388, rs72559745, rs4149056, rs72559746, rs55901008, rs59502379 , and rs56199088 . P-gp, an efflux pump encoded by the ABCB1 gene, plays a role in viral resistance and trafficking cytokines and enveloped viruses. Despite the identification of several ABCB1 variants, only the rs1128503 c.1236C > T, the rs2032582 c.2677G > T/A , and the rs1045642 c.3435C > T are relevant in pharmacogenetics studies . CYP3A4 is the most abundant hepatic enzyme system expressed in most populations, with the expression of more than 34 allelic variants. During the inflammatory response of COVID-19, CYP3A4 presents a cytokine-mediated down-regulation via the JAK/STAT pathway, specifically through interleukin-6 (IL-6) . 
In severe COVID-19 patients, the use of steroid therapy with remdesivir could affect the CYP3A4 transcription, which may result in a lower therapeutic drug level of remdesivir and hence higher IL-6 . CYP2C8 displays low genetic variation, while CYP2D6 is characterized by extensive genetic variability impacting on the enzyme activity. Those with the duplication or multiplication of active alleles are associated with increased enzyme activity . There is a significant interethnic variability for the CYP2D6 variants which might cause differences in response to CYP2D6 substrates in some populations. This is observed with a higher frequency among Caucasians, East Asian populations, and duplication/multiplication of active alleles in Middle Eastern populations, and in Black African populations . The combination of CYP2D6 alleles could also anticipate the metabolic phenotype of the patients. For example, populations carrying two null alleles are considered poor metabolizers; those with one functional allele and one null allele considered intermediate metabolizers; those with two functional alleles are considered extensive metabolizers, while those with duplicated or multiplied functional alleles are considered ultra-rapid metabolizers . Generally, remdesivir has a low risk of significant genetic pharmacokinetic (PK) interactions, and theoretically, polymorphisms and variants of these genes could affect the PK of remdesivir. Therefore, no evidence recommends pharmacogenetic testing before the administration of remdesivir. Lopinavir Lopinavir is an antiretroviral protease inhibitor and is exclusively administered in combination with ritonavir. Coadministration with low-dose ritonavir significantly improves the pharmacokinetic properties and hence the activity of lopinavir against HIV-1 protease. This is mainly related to ritonavir which is considered a potent inhibitor of the CYP3A4 enzymes responsible for the extensive metabolism of lopinavir. 
In addition to reducing biotransformation of lopinavir, this combination also improves absorption and oral bioavailability of lopinavir by inhibiting OATP1B1 and OATP1B3 along with P-gp in the gut wall . CYP3A4 is primarily involved in lopinavir metabolism and is transported by ABCB1 and ABCB2 . On the other hand, ritonavir is metabolized by CYP2J2, CYP3A4, CYP3A5 , and CYP2D6 . Therefore, the simultaneous administration of lopinavir and ritonavir should be avoided with other drugs that are highly potent CYP3A inducers . The antiviral activity of lopinavir is produced via the inhibition of the enzyme 3-chemotrypsin-like protease (3CLpro) that has an important role in viral RNA processing and release from the host cell . In addition, lopinavir blocks a post-entry step in the replication cycle of SARS‐CoV‐2, making lopinavir a promising potential drug for COVID-19 treatment . However, WHO recommends against its use because of the lack of sufficient evidence and potential serious side effects from this combination, such as vomiting, diarrhoea, hypertriglyceridemia, and lipodystrophy . Bioavailability of this antiviral combination can be increased substantially with concurrent ingestion of fatty food. Both agents undergo extensive and rapid first-pass metabolism by hepatic cytochrome P450 (3A4 isoenzyme). With lopinavir/ritonavir 400/100 mg twice daily administration, the elimination half-life and average oral clearance of lopinavir is nearly 4–6 h and nearly 6–7 L/h, respectively. Less than 3% and 20% of the lopinavir dose is excreted unchanged in the urine and faeces, respectively. This antiviral combination has the potential to interact with wide variety of drugs or herbal products via several mechanisms, mostly involving the CYP enzymes and is contraindicated with certain drugs, such as flecainide, propafenone, astemizole, terfenadine, ergot derivatives, cisapride, pimozide, midazolam, triazolam and St. John's wort . 
Genetic screening of Apo lipoprotein E (APOE) and APOC3 can be initiated to reduce the risk of these complications of hypertriglyceridemia and lipodystrophy associated with ritonavir . Variations of lopinavir concentrations among populations are associated with SLCO1B1. The increased CYP3A4 activity is related to CYP3A4 polymorphism L292P (rs28371759, CYP*18B) causing an increased lopinavir metabolism. CYP3A4 polymorphism L292P is observed more in East Asian populations who are metabolizing lopinavir and ritonavir more rapidly. There are many coding single nucleotide polymorphisms (SNPs) in the ABCB1 gene . The Asian population has a significantly different variant allele frequency of 3435C > T than the African and Caucasian populations . Many non-synonymous polymorphisms in ABCB1 are available, such as S893T, S893A (rs2032582), N21D (rs9282564) and S400 N (rs2229109) , which can increase drug concentrations and produce more drug response via reducing the efflux of ABCB1 . African populations carry the highest frequency of S893A , reaching up to 90%. East Asians carry S893T and Europeans carry N21D . These patient populations may have more responsiveness to the drugs transported by ABCB1 , as shown in . Favipiravir Favipiravir is a prodrug purine nucleic acid analogue and potent RdRp inhibitor , that has been licensed as an antiviral medication used to treat influenza since 2014 . Favipiravir selectively inhibits the viral RNA dependent RNA polymerase or causes lethal mutagenesis upon incorporation into the viral RNA. Favipiravir inhibits SARS‐CoV‐2 replication in Vero E6 cells. Emergency approval of favipiravir in adult patients with COVID-19 was announced by the National Medical Product Administration (NMDA) in China (Du and Chen, 2020), and is still in use in various countries as a potential treatment for COVID-19 due to its efficacy against different viral infections. However, this antiviral agent is neither approved by the FDA nor recommended by the WHO . 
Favipiravir is available in oral form with excellent bioavailability. It is metabolized in the liver by aldehyde oxidase and partially by xanthine oxidase. The efficacy of substrates of aldehyde oxidase, such as azathioprine or allopurinol, is associated with variants of aldehyde oxidase. Hence, allelic variants of aldehyde oxidase and xanthine oxidase genes should be considered in therapy. Overall safety profile is good with some concerns of gastrointestinal side effects and hyperuricemia . Molnupiravir Molnupiravir is the 5′-isobutyrate prodrug of the antiviral ribonucleoside analogue β-D-N4-hydroxycytidine (NHC). Molnupiravir was the first oral antiviral drug approved by the United Kingdom Medicines and Healthcare Products Regulatory Agency and by the FDA for the emergency treatment of COVID-19 in adults. However, molnupiravir's safety profile is still under invisatigation and clinical trilas to detect clinically important side effects regardless its safe use in patients with hepatic and renal impairment. Moreover, benefit of treatment has not been observed when treatment started after COVID-19 hospitalization. Therefore, it is neither indicated for use in younger patients than 18 years of age because of its effect on bone and cartilage growth nor for the pre- or post-exposure prevention of COVID-19 . Molnupiravir is an inhibitor of RNA-dependent RNA polymerase (RdRp) that plays an important role in the replication of SARS-COV-2 . The cellular uptake of circulating NHC is involved in the endogenous phosphorylation of pyrimidine nucleoside pathways to form an active ribonucleoside triphosphate (NHC-TP), which binds to the genome of viral RNA (guanosine or adenosine), and then can be substituted to either cytidine or uridine triphosphate, by viral RNA polymerase. This in turn results in the accumulation of many mutations in the viral genome, leading to both viral suppression and inhibition inside the tissues. 
Molnupiravir is hydrolyzed to NHC by esterases CES1 and CES2 . The conversion of molnupiravir to NHC could be inhibited by genetic variations in the genes encoding esterases CES1 and CES2 . Molnupiravir is a weak substrate of the human nucleoside transporter (CNT1), while NHC is a substrate of the human nucleoside transporters CNT1, CNT2, CNT3, and ENT2. Some patients had no pharmacological response to molnupiravir due to genetic variations in the genes encoding CNT1, CNT2, CNT3 and ENT2 . Being a prodrug , molnupiravir is a 5′-isobutyrate ester is cleaved by esterases present in the gastrointestinal tissues of the intestine and liver during absorption and hepatic first pass, delivering the ribonucleoside metabolite NHC into systemic circulation. This results in only very low levels of molnupiravir detected in the plasma. Distribution of molnupiravir, NHC, and NHC-TP is quantified in lung, spleen, kidney, liver, heart and brain. No data is reported for distribution to other tissues, such as bone and cartilage, the GI tract or reproductive tissues. Since molnupiravir is not stable in plasma, the plasma protein binding of molnupiravir was not assessed. Regarding the metabolisim and pharmacogenomic situation, sufficient and reliable data is not published yet of the phase studies and more information is needed to comment on this part . Oseltamivir Oseltamivir is an inactive pro-drug antiviral drug indicated for the treatment of influenza A and B infections via peptide transporter 1 (PepT1) after being converted to the active metabolite oseltamivir carboxylate through the hepatic enzyme carboxylesterase 1 (CES1). This active metabolite inhibits viral neuraminidase (NEU2), thereby blocking progeny viral release from infected cells and viral entry into uninfected cells. However, this antiviral drug can be eliminated before being activated as it is a substrate of P-gp. 
Oseltamivir effectiveness in the treatment of COVID-19 is still under evaluation by several clinical trials . A liquid formulation of oseltamivir (2 mg/kg twice daily for 5 days) is effective in the treatment of children with influenza, and may be used in high-risk populations, such as the elderly or those with chronic cardiac or respiratory disease. Furthermore, short term administration oral oseltamivir at a dose of 75 mg once or twice daily for 6 weeks significantly prevented the development of naturally acquired influenza by >70% in unvaccinated healthy adults when administered within 48 h of symptom onset in the infected person. The drug also effective when used adjunctively in previously vaccinated high-risk elderly patients. Oseltamivir is generally well tolerated . After oral administration, oseltamivir is rapidly absorbed from the gastrointestinal tract and its absorption is not significantly affected by the presence of food. It has a high oral bioavailability reaching up to 79% and the plasma concentrations are detected within 30 min of an oral oseltamivir dose. It is then extensively metabolized, mainly by hepatic esterases, to its only active metabolite oseltamivir carboxylate . This metabolite is rapidly distributed to the primary site of influenza virus replication (surface epithelial cells of the respiratory tract) after oral oseltamivir administration. Oseltamivir carboxylate is renally eliminated by a first-order process, primarily by glomerular filtration and renal tubular secretion and has a terminal elimination half-life of 6–10 h. Its clearance is reduced in patients with severe renal dysfunction. Furthermore, the clearance is slower in the elderly (≥65 years) and faster in children (≤12 years) than in adults. There are no clinically significant drug interactions detected with oseltamivir . 
Clinical and pharmacogenetic studies significantly reported inter-individual variability in the pharmacokinetics and the occurrence of adverse drug reactions (ADRs) to oseltamivir related to the CES1 genetic variants . G143E variants were reported to be 3.7% in Whites, 4.3% in Blacks and 2.0% in Hispanic populations . Variations in plasma concentration-time curve of oseltamivir was also found associated with the rs71647871 p. Gly143Glu (Lim et al., 2009). The rs200707504 c.662A > G in CES1 was associated with a decreased antiviral drug bioactivation . Oseltamivir ADRs were found associated with variants in ABCB1, CES1, NEU2 , and SLC15A1 , the gene encoding the transporter PepT1. In this regard, the T allele was predominantly related to the occurrence of ADRs, in contrast to the C allele, which was not associated with the reporting of ADRs . Both CES1 and ABCB1 genetic variants are considered valid biomarkers for the prediction and optimization of oseltamivir pharmacotherapy . Atazanavir Atazanavir is a potent protease inhibitor (PI), approved as a component of antiretroviral therapy (ART) regimens administered once daily for the treatment of patients with HIV-1 infection. It inhibits SARS‐CoV2 replication in both Vero cells and human epithelial pulmonary cells. Atazanavir is rapidly absorbed with 60–80% oral bioavailability and reaching peak plasma concentrations (Cmax) after 2–3 h and is metabolized by CYP3A. Atazanavir is ≥ 86% protein bound being 86% bind to albumin and 89% bind to α1-acid glycoprotein . It is extensively metabolized in the liver by CYP3A4 to oxygenated metabolites. After a single 400 mg dose, 79% of atazanavir is mainly eliminated via the biliary route, with only minor elimination via the kidneys (13%). Unchanged drug accounted for 20% and 7% of these quantities . Atazanavir is an inhibitor of CYP3A and UDP glucuronosyl transferase 1A (UGT1A). 
The drug carries several polymorphisms in UGT1A1 , including a variable dinucleotide (TA) repeat within the gene promoter region . High risk hyperbilirubinemia has been associated with the homozygotes, while intermediate risk hyperbilirubinemia has been associated with the heterozygotes . Therefore, pharmacogenomic counselling before initiating atazanavir therapy is recommended to avoid the development of hyperbilirubinemia . A partial metabolism of atazanavir is mediated by the P-gp efflux pump encoded by the multidrug resistance 1 (MDR1) gene. This increases plasma concentrations of atazanavir in the presence of 3435 variable genetic homozygosis C/C, predisposing the patients to hyperbilirubinemia and severe jaundice. On the other hand, the risk of early onset lipodystrophy is related to polymorphism 238 G > A , while the risk of dyslipidaemia is associated with APOA5 gene polymorphisms (1131 T > C and 64 G > C), APOC3 (482 C > T, 455 C > T, 3238 C > G) , and ABCA1 (2962 A > G) and APOE (2 and 3 haplotypes) . Nirmatrelvir Nirmatrelvir is an oral antiviral medication that inhibits SARS-CoV-2 main protease (M pro ). M pro is the focus of extensive structure-based drug design efforts, which are mostly covalent inhibitors targeting the catalytic cysteine, thereby impairing the virus's ability to reproduce itself . This cysteine is responsible for the activity of the 3CL PRO of SARS-CoV-2 and potentially other members of the coronavirus family. 3CL PRO , also known as the main protease or non-structural protein 5. It is responsible for cleaving polyproteins 1a and 1 ab. These polyproteins contain the 3CL PRO itself, a papain-like (PL) cysteine protease, and 14 other non-structural proteins. Without the activity of the 3CL PRO , non-structural proteins (including proteases) cannot be released to perform their functions, inhibiting viral replication. Nirmatrelvir is co-administered orally with a low dose of ritonavir (PAXLOVID™) for the prevention of COVID-19. 
It reduces the risk of hospitalization or death by 89% compared with placebo in non-hospitalized high-risk adults with COVID-19 . PAXLOVID treatment should be initiated as soon as possible after a diagnosis of COVID-19 and within 5 days of symptom onset. The drug is administered orally, with or without food, at a dosage of 300 mg nirmatrelvir (two 150 mg tablets) with 100 mg ritonavir (one 100 mg tablet), with all three tablets taken together twice daily for 5 days . In vitro studies suggest that CYP3A4 has a significant role in the metabolism of nirmatrelvir, which offers the opportunity to improve efficacy by co-dosing with a potent CYP3A4 inhibitor such as ritonavir . However, ritonavir poses a significant risk of drug interactions because of its potent inhibition profile; patients and clinicians should consult the prescribing information for nirmatrelvir and ritonavir to evaluate any potential interactions with existing medications before initiating nirmatrelvir. Ritonavir inhibits not only CYP isoenzyme family members such as CYP3A4, CYP2D6, CYP2C19, CYP2C8, and CYP2C9 , but also the efflux pumps P-glycoprotein and breast cancer resistance protein (ABCG2), organic anion transporting polypeptides and hOCT1 in the liver, and the multidrug and toxin extrusion protein involved in renal drug handling (MATE1). On the other hand, it induces CYP1A2, CYP2B6, CYP2C9, CYP2C19 , and the UGT family. Because these inhibitory and inductive effects can precipitate serious adverse reactions, nirmatrelvir is contraindicated with drugs that are highly dependent on CYP3A for clearance. Concomitant use of ritonavir with statins, steroids, sedative hypnotics, anticoagulants, and antiarrhythmic therapies is contraindicated . Dosage adjustment is needed in patients with moderate renal impairment, while the drug is not recommended in patients with severe renal impairment.
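As simple arithmetic on the regimen described above, the 5-day course totals follow directly from the stated dosage (a trivial sketch, no parameters assumed beyond those quoted):

```python
# Course totals for the standard PAXLOVID regimen described above:
# 300 mg nirmatrelvir (two 150 mg tablets) + 100 mg ritonavir,
# taken twice daily for 5 days.
NIRMATRELVIR_MG, RITONAVIR_MG = 300, 100
DOSES_PER_DAY, DAYS = 2, 5

total_nirmatrelvir = NIRMATRELVIR_MG * DOSES_PER_DAY * DAYS  # 3000 mg
total_ritonavir = RITONAVIR_MG * DOSES_PER_DAY * DAYS        # 1000 mg
total_tablets = (2 + 1) * DOSES_PER_DAY * DAYS               # 30 tablets
```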
No dosage adjustment is needed in patients with mild or moderate hepatic impairment. No pharmacokinetic or safety data are available on the use of nirmatrelvir or ritonavir in subjects with severe hepatic impairment; therefore, nirmatrelvir is not recommended in patients with severe hepatic impairment .
Biological agents
3.1 Tocilizumab
Tocilizumab was approved by the FDA in 2010 as a novel humanized monoclonal antibody that acts as an interleukin (IL)-6 receptor antagonist, preventing IL-6 signal transduction to inflammatory mediators of B and T cells, for the treatment of cytokine release syndrome, systemic juvenile idiopathic arthritis, giant cell arteritis, and rheumatoid arthritis . Intravenous tocilizumab 8 mg/kg is effective and well tolerated, with most treatment-emergent adverse events mild to moderate in intensity. Tocilizumab has a long, concentration-dependent half-life, allowing monthly administration. It undergoes biphasic elimination; total clearance is concentration dependent and is the sum of linear and non-linear clearance. Age, sex and ethnicity did not affect the pharmacokinetics of tocilizumab . COVID-19 patients experience severe inflammatory responses, particularly in the lungs, because activated T‐lymphocytes and mononuclear macrophages release inflammatory cytokines such as IL‐6, which bind to the IL‐6 receptor on target cells, causing the cytokine storm . Therefore, blocking the IL-6 receptor with tocilizumab has resulted in better outcomes in patients with severe COVID-19 pneumonia. Accordingly, the WHO recommends the use of tocilizumab for severe or critical COVID-19 patients . Tocilizumab blocks the downregulation of CYP3A4 caused by IL-6. The FCGR3A genotype is the only genetic variant potentially affecting the pharmacokinetics of tocilizumab and was associated with a higher response to drug treatment.
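The statement above that tocilizumab's total clearance is the sum of linear and non-linear components can be illustrated with a parallel linear plus Michaelis-Menten clearance model; the parameter values below are hypothetical illustrations, not tocilizumab's actual estimates:

```python
def total_clearance(c_mg_per_l, cl_linear=0.3, vmax=6.0, km=3.0):
    """Total clearance (L/day): a linear pathway plus a saturable
    (target-mediated) pathway whose clearance Vmax/(Km + C) falls as
    concentration rises. All parameter values are illustrative only."""
    return cl_linear + vmax / (km + c_mg_per_l)

# Clearance is higher at low concentration (saturable pathway active)
# and approaches the linear term alone at high concentration.
cl_low, cl_high = total_clearance(1.0), total_clearance(50.0)
```

Because clearance falls as concentration rises, the effective half-life lengthens at higher concentrations, consistent with the concentration-dependent half-life and monthly dosing described above.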
On the other hand, genetic polymorphisms in the IL6R gene may affect the intracellular signaling pathway of the IL-6 receptor bound to tocilizumab. Patients with the IL6R rs4329505 CC and CT genotypes may have a decreased response to tocilizumab compared with patients with the TT genotype. Furthermore, the rs12083537 AA genotype was associated with a decreased response to tocilizumab compared with the AG genotype, whereas the rs11265618 CC genotype was associated with an increased response compared with the CT and TT genotypes . Such genetic variation may predict and affect the therapeutic response to tocilizumab in COVID-19 patients, as shown in .
3.2 Casirivimab and imdevimab
Casirivimab and imdevimab are combined neutralizing immunoglobulin gamma 1 (IgG1) human monoclonal antibodies that target the receptor-binding domain of the spike protein of SARS-CoV-2 and block its binding to human ACE2 receptors . The WHO recommends combined administration of casirivimab and imdevimab for non-severe patients at the highest risk of hospitalization or for severe patients who are seronegative, as shown in . In addition, the FDA issued an EUA for casirivimab and imdevimab for the treatment of mild to moderate COVID-19 in adults and paediatric patients (12 years of age and older, weighing at least 40 kg) with positive results of direct SARS-CoV-2 viral testing who are at high risk of progression to severe COVID-19, including hospitalization or death. The pharmacokinetics of casirivimab/imdevimab are linear and dose-proportional after a single intravenous (IV) administration of 300–8000 mg. A single 1200 mg intravenous dose of casirivimab/imdevimab yields mean maximum serum concentrations (Cmax) of 182.7 and 181.7 mg/L, respectively, and mean concentrations 28 days after administration (C28) of 37.9 and 31.0 mg/L.
A single 1200 mg subcutaneous dose of casirivimab/imdevimab achieves mean Cmax values of 52.2 and 49.2 mg/L and mean C28 values of 30.5 and 25.9 mg/L. The intravenous and subcutaneous repeat-dose regimens of this combination achieve serum trough concentrations similar to the mean C28 values seen after a single 1200 mg subcutaneous dose . After a single 1200 mg intravenous dose, casirivimab and imdevimab have mean half-lives of 31.2 and 27.3 days, respectively; after a single 1200 mg subcutaneous dose, casirivimab had mean half-lives of 30.2 and 32.4 days and imdevimab of 26.5 and 27.0 days. The estimated total volume of distribution of casirivimab is 7.16 L and that of imdevimab is 7.43 L. Both antibodies are degraded into small peptides and amino acids; they are not metabolized by CYP450 enzymes or excreted renally or hepatically to any significant extent . Patient characteristics, including age, sex, bodyweight, race, and hepatic or renal impairment, do not appear to affect casirivimab or imdevimab exposure to any clinically relevant extent. Because the combination is not metabolized by CYP450 enzymes or renally excreted, no drug-drug interactions are expected with drugs that are substrates, inducers or inhibitors of CYP450 enzymes or that are renally excreted . Casirivimab/imdevimab, administered via intravenous infusion or subcutaneous injection, was generally well tolerated in clinical studies. No risks associated with genetic polymorphisms have been identified .
3.3 Tixagevimab and cilgavimab
The recently FDA-approved combination of the monoclonal antibodies (mAbs) tixagevimab and cilgavimab can reduce the risk of COVID-19 hospitalization or death in high-risk patients, as shown in .
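Returning briefly to the casirivimab/imdevimab half-lives quoted above (31.2 and 27.3 days after IV dosing), an idealized mono-exponential decay shows how much of a terminal-phase concentration remains at day 28. Real mAb disposition is biphasic, so this is only a back-of-envelope sketch:

```python
import math

# Idealized mono-exponential decay using the ~31.2-day (casirivimab)
# and ~27.3-day (imdevimab) IV half-lives quoted above.
def fraction_remaining(t_days, t_half_days):
    return math.exp(-math.log(2) * t_days / t_half_days)

# Fraction of a terminal-phase level still present 28 days post-dose:
casirivimab_28d = fraction_remaining(28, 31.2)  # ~0.54
imdevimab_28d = fraction_remaining(28, 27.3)    # ~0.49
```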
Having neutralised all previous SARS-CoV-2 variants, these long-acting human immunoglobulin G1 (IgG1κ) mAbs specifically bind to different, non-overlapping sites on the spike protein of the virus and block SARS-CoV-2 attachment to and entry into human cells. The combination is only authorized for adult patients who are not currently infected with the novel coronavirus and have not recently been exposed to an infected individual . The recommended dose is 300 mg, consisting of 150 mg of tixagevimab and 150 mg of cilgavimab administered as separate sequential intramuscular (IM) injections at different injection sites in two different muscles, preferably the gluteal muscles. A higher 600 mg dose, consisting of 300 mg of tixagevimab and 300 mg of cilgavimab, may be more appropriate for some SARS-CoV-2 variants . The pharmacokinetics of tixagevimab and cilgavimab are comparable, linear and dose-proportional after a single intravenous (IV) administration. After a single IM administration of the combination in a phase 1 trial, the mean maximum concentrations (Cmax) of tixagevimab and cilgavimab (16.5 and 15.3 μg/mL) were reached at a median Tmax of 14 days, with a bioavailability of more than 65% for both mAbs. The central volume of distribution was 2.72 L for tixagevimab and 2.48 L for cilgavimab, and the peripheral volume of distribution was 2.64 L and 2.57 L, respectively. The estimated time to reach the minimum protective serum concentration of 2.2 μg/mL in the gluteal region is 6 h . The combination is associated with hypersensitivity reactions, with some reports of serious cardiac events. Tixagevimab and cilgavimab are expected to be degraded into small peptides and component amino acids via catabolic pathways, in the same manner as endogenous IgG antibodies, and are not likely to undergo renal excretion . The pharmacogenomic properties are still under investigation to detect any risks associated with genetic polymorphisms.
3.4 Bamlanivimab-etesevimab
Bamlanivimab and etesevimab are another mAb combination approved by the FDA as an emergency treatment for mild to moderate COVID-19, including in patients with a body mass index (BMI) ≥35 kg/m², chronic kidney disease, diabetes mellitus, immunosuppressive disease, adults aged ≥65 years, and those with other high-risk comorbidities, as shown in . Both bamlanivimab and etesevimab are recombinant neutralizing human IgG1κ mAbs against the spike protein of SARS-CoV-2; they are unmodified in the Fc region and block spike protein attachment to the human ACE2 receptor. The combination binds to different overlapping epitopes in the receptor-binding domain (RBD) of the S-protein . The pharmacokinetic profiles of these mAbs are linear and dose-proportional following IV infusion, with no change in their pharmacokinetics whether administered alone or together, suggesting no interaction between the two drugs. There are limited data on their distribution into human or animal milk. They are not metabolized by CYP isoenzymes but are expected to be degraded into small peptides and component amino acids via catabolic pathways, in the same manner as endogenous IgG antibodies. In addition, they are not eliminated by renal excretion; the mean apparent terminal elimination half-life is 17.6 days for bamlanivimab and 25.1 days for etesevimab. The pharmacogenomic properties are still under investigation to detect any risks associated with genetic polymorphisms .
3.5 Sotrovimab
Sotrovimab is a recombinant human monoclonal immunoglobulin G1 antibody targeted against SARS-CoV-2 and engineered to enhance distribution to the lungs and extend antibody half-life. It is a recombinant human IgG1-kappa mAb that binds to a conserved epitope on the spike protein receptor-binding domain of SARS-CoV-2.
Sotrovimab is considered an alternative to casirivimab-imdevimab and is approved by the FDA, EU and WHO as an emergency treatment for mild to moderate COVID-19 in adolescents (aged ≥12 years and weighing ≥40 kg) who do not require oxygen supplementation and who are at high risk of progressing to severe COVID-19 , as shown in . The geometric mean Cmax following a 1 h IV infusion is 117.6 μg/mL, and the mean steady-state volume of distribution of sotrovimab is 8.1 L. As an engineered human IgG1 mAb, sotrovimab is degraded by proteolytic enzymes that are widely distributed in the body and not restricted to hepatic tissue. Regarding elimination, the mean systemic clearance is 125 mL/day, with a median terminal half-life of approximately 49 days. Patient characteristics, including age and hepatic or kidney impairment, do not appear to have any clinically significant impact on the pharmacokinetics or elimination of sotrovimab. Because it is not metabolized by CYP450 enzymes or renally excreted, no drug-drug interactions are expected between sotrovimab and drugs that are substrates, inducers or inhibitors of CYP450 enzymes or that are renally excreted . The pharmacogenomic properties are still under investigation to detect any risks associated with genetic polymorphisms .
3.6 Anakinra
Anakinra is a recombinant nonglycosylated form of the human IL-1 receptor antagonist (IL-1ra), designed specifically to modify the biological immune response to IL-1 and approved by the FDA in 2001 for the treatment of rheumatoid arthritis . It is manufactured using recombinant DNA technology and competitively inhibits IL-1α and IL-1β from binding to the IL-1 type I receptor. The clearance of anakinra parallels creatinine clearance: it is directly related to renal function and is reduced in patients with renal impairment. Therefore, dosage modification should be considered only for individuals with moderate to severe renal impairment.
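As a rough consistency check on the sotrovimab parameters reported above, the one-compartment relation t½ = ln(2)·V/CL applied to the stated clearance (125 mL/day) and steady-state volume (8.1 L) gives about 45 days, broadly in line with the ~49-day median terminal half-life. mAb kinetics are more complex than one compartment, so exact agreement is not expected:

```python
import math

# One-compartment consistency check: t_half = ln(2) * V / CL,
# using sotrovimab's reported CL (~125 mL/day) and Vss (~8.1 L).
CL_L_PER_DAY = 0.125
V_L = 8.1

t_half_days = math.log(2) * V_L / CL_L_PER_DAY  # ~44.9 days
# Broadly consistent with the reported ~49-day median terminal half-life.
```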
However, no dosage adjustment is required in patients with hepatic impairment. To date, no pharmacokinetic interactions have been reported between anakinra and drugs likely to be co-administered . The anti-inflammatory effect during the COVID-19-induced cytokine storm is the primary reason for its repurposing. In some studies, anakinra was effective in reducing clinical signs of hyper-inflammation in critically ill COVID-19 patients; however, more clinical trials are needed to reach conclusive evidence supporting its efficacy . Anakinra is recommended by the EMA only for adult COVID-19 patients with pneumonia requiring supplemental oxygen who are at risk of developing severe respiratory failure . Although anakinra is not metabolized by Phase I or Phase II enzymes, IL-1 genes are responsible for the response to anakinra treatment. The G4845T (rs17651) T allele was found to alter IL-1α production, shifting responsiveness to anakinra . The response rate of patients carrying the rare allele of this gene is significantly higher than that of those who do not .
3.7 Interferons
Interferons (IFNs) are a group of signaling glycoproteins known as cytokines that can interfere with viral replication and are expressed rapidly during viral infection. They therefore form an important part of a very early, virus-unspecific host defense mechanism against multiple viruses. They are divided into type I interferons (several interferon alpha subtypes, interferon beta, interferon epsilon, interferon kappa and interferon omega), type II interferon (interferon gamma) and type III interferons (several interferon lambda subtypes). Interferons are licensed for their potential role against both DNA and RNA viruses. High doses of interferons should be administered to achieve the high serum levels that are probably essential for antiviral treatment.
Moreover, interferons are formulated in pegylated forms to prolong the elimination half-life and thus decrease the necessary administration frequency . Pharmacogenomic variables for interferons are not well defined. However, studies suggest that polymorphism of the interferon-induced transmembrane protein-3 (IFITM3) gene, particularly SNP rs12252 , is associated with a more severe COVID-19 prognosis in an age-dependent way and is more prevalent in the Asian population, as shown in . The IFITM3 gene encodes an immune effector protein critical to viral restriction that acts to restrict membrane fusion . IFNs have been suggested as a potential treatment for COVID-19 because of their antiviral properties . Nevertheless, since most of the studies on interferon efficacy in managing COVID-19 were of low quality and did not reach a conclusive result, the current recommendation is against using IFNs for severe cases of COVID-19 .
Anti-inflammatory agents
4.1 Dexamethasone
Dexamethasone is a glucocorticosteroid used in the treatment of a wide variety of clinical conditions for its potent anti-inflammatory and immunosuppressive effects, suppressing cytokine release and inhibiting lung infiltration by neutrophils and other leukocytes. Dexamethasone decreases vasodilation, capillary permeability and leukocyte migration to sites of tissue inflammation by binding to specific glucocorticoid receptors, such as NR3C1 and NR3C2 , which initiates a series of changes in gene expression .
Systemic dexamethasone blunts the COVID-19-induced systemic inflammatory and cytokine responses that can lead to lung injury and multisystem organ dysfunction. Currently, the WHO strongly recommends the use of dexamethasone, orally or intravenously, only for hospitalized patients with severe and critical COVID-19 who need either mechanical ventilation or supplemental oxygen; this is also recommended by the NIH and EMA for paediatric patients . Dexamethasone is a substrate of P-gp and, although it has relatively low hepatic extraction, it is mainly metabolized by the cytochrome P450 enzymatic system, primarily the CYP3A4 isoform and, to a lesser extent, CYP3A5 . Dexamethasone is hydroxylated to 6α- and 6β-hydroxydexamethasone and is converted to 11-dehydrodexamethasone by corticosteroid 11-beta-dehydrogenase isozyme 2, which can be reconverted by corticosteroid 11-beta-dehydrogenase isozyme 1 . CYP3A4 is highly polymorphic, and genetic variation and co-treatment with CYP3A4 inhibitors can modulate gene function, affecting the pharmacokinetics of dexamethasone and increasing the risk of systemic side effects . Approximately 108 polymorphisms have been identified in the glucocorticoid receptor ( NR3C1 ) gene; among these, nine nonsynonymous SNPs and four synonymous SNPs had a minor allele frequency above 5% . On the other hand, thirteen SNPs identified across different populations were clinically associated with dexamethasone response. Among these variations, rs2032582 and rs1045642 in the ABCB1 gene show the highest frequency of risk alleles across populations in the genome aggregation database . The rs6190 (ER22/23EK), rs56149945 (N363S), rs41423247 (BclI) and rs6198 (9beta) variants are the four most common polymorphisms in the NR3C1 gene linked to dexamethasone response.
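Minor allele frequencies like those quoted above translate into expected genotype frequencies under Hardy-Weinberg equilibrium. The sketch below uses the 5% MAF threshold mentioned for the NR3C1 SNPs and assumes equilibrium, which real populations need not satisfy:

```python
def hardy_weinberg(maf):
    """Expected genotype frequencies for a biallelic SNP with minor
    allele frequency `maf`, assuming Hardy-Weinberg equilibrium."""
    p, q = 1.0 - maf, maf
    return {"major_hom": p * p, "het": 2 * p * q, "minor_hom": q * q}

# At a 5% minor allele frequency, roughly 9.5% of individuals are
# expected heterozygous carriers and 0.25% minor-allele homozygotes.
freqs = hardy_weinberg(0.05)
```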
Two alleles, BclI G and 363S, were associated with an increased dexamethasone response, whereas the ER22/23EK allele was associated with a decreased drug response. Dexamethasone treatment also shows sex-specific modulation in the NR3C2 gene via the rs5522 and rs2070951 alleles. The rs5522 variant showed a weak response in males with a homozygous AA genotype, whereas rs2070951 showed an enhanced response in females and a weak response in male G-allele carriers.
Other agents

5.1 Azithromycin

Azithromycin is an azalide antimicrobial agent, structurally related to the macrolide erythromycin, that was initially approved by the FDA in 1991 to treat respiratory infections like bronchitis and pneumonia, enteric bacterial infections, and genitourinary infections. It interferes with bacterial protein synthesis by binding to the 50S component of the 70S ribosomal subunit.
Due to its structural properties, azithromycin does not interact with cytochrome P450 enzymes, but it is a substrate of the transporters P-gp and MRP2. The interaction of azithromycin with P-gp has been suggested as the reason for its proposed efficacy in COVID-19 treatment and its synergistic effect when combined with hydroxychloroquine. Influential gene polymorphisms for azithromycin are the single nucleotide polymorphisms C1236T, G2677T/A, and C3435T in the ABCB1 gene, which may have a considerable impact on the pharmacokinetics of azithromycin, particularly among the Chinese Han ethnic group. The prevalence of the 3435C allele is higher in populations of African ancestry than in European populations. Azithromycin was used, with a well-known safety profile, in combination with chloroquine and hydroxychloroquine to treat COVID-19 when the pandemic first broke out. Azithromycin showed in vitro antiviral activity against COVID-19 at different stages of the viral cycle. In addition, it has immunomodulatory properties via its ability to down-regulate cytokine production, maintain epithelial cell integrity, and prevent lung fibrosis. However, the evidence for its usefulness was questioned and of low quality, and azithromycin is currently not recommended for the treatment of COVID-19.
Conclusions

Although the genetic determinants, mechanisms, and pharmacogenetic biomarkers are established and deployed to date for many of the current repurposed drugs, no evidence-based guidelines for genetic testing and pharmacogenomic data are currently available for patients with COVID-19 to minimize possible adverse events and the pharmacogenomic burden. Incorporating adequate knowledge of pharmacogenomic approaches, evaluating further pharmacogenetic biomarkers, and addressing personalized medicine aspects should be prioritized in prospective clinical studies. Repurposed drugs and emerging therapies for COVID-19 offer the prospect of personalized therapeutic outcomes and of advancing COVID-19 medicines for public health benefit. Sardas S conceived of the study.
AL-TAIE A and Büyük AS reviewed the literature, conducted the quality assessment, and extracted the data. Sardas S and AL-TAIE A reviewed the data and drafted the manuscript. Sardas S and AL-TAIE A were the project managers and advisors on the project. All authors read and approved the final manuscript.
Plain versus drug balloon and stenting in severe ischaemia of the leg (BASIL-3): open label, three arm, randomised, multicentre, phase 3 trial
However, in patients with chronic limb threatening ischaemia selected for endovascular revascularisation in preference to surgical revascularisation, considerable uncertainty exists about the relative effectiveness of different procedures, and in particular, the role of PBA with or without bare metal stenting (BMS), drug coated balloon angioplasty (DCBA) with or without BMS, and drug eluting stenting (DES). This lack of evidence is reflected in systematic reviews, meta-analyses, and published guidelines. In 2012, the UK National Institute for Health and Care Excellence (NICE) recommended a randomised controlled trial to compare the clinical and cost effectiveness of PBA, BMS, DCBA, and DES in patients with chronic limb threatening ischaemia. This led the UK National Institute for Health and Care Research, Health Technology Assessment (NIHR HTA) programme to fund the balloon versus stenting in severe ischaemia of the leg—3 (BASIL-3) randomised controlled trial reported here. The aim of this trial was to determine which procedure resulted in better amputation free survival in patients with chronic limb threatening ischaemia who required endovascular femoro-popliteal, with or without infra-popliteal, revascularisation. Three strategies were compared: femoro-popliteal PBA±BMS, DCBA±BMS, or primary DES as the first revascularisation strategy. The BASIL-3 health economic analysis will be reported separately.
Study design and participants

BASIL-3 was an open label, pragmatic, three arm, multicentre, superiority, phase 3, randomised controlled trial conducted at most (35) of the major UK NHS vascular units. Eligible participants were those who presented with chronic limb threatening ischaemia caused by atherosclerotic peripheral arterial disease and who, after a process of shared decision making, were offered and consented to a femoro-popliteal, with or without an infra-popliteal, endovascular revascularisation in preference to surgical revascularisation. As is standard UK practice, chronic limb threatening ischaemia was diagnosed on the basis of history and clinical examination, supported by selective use of haemodynamic testing and arterial imaging (one or more of duplex ultrasound, computed tomography angiography, magnetic resonance angiography, and digital subtraction angiography). To be randomised, a patient had to be assessed by a multidisciplinary team (including at least two vascular surgery or interventional radiology consultants), and deemed suitable for all three endovascular strategies. None of the patients randomised had undergone previous major (above the ankle) amputation of the trial leg. Patients presenting with intermittent claudication were not eligible for enrolment. None of the participants had a planned major (above ankle) amputation of the trial leg at the point of randomisation. Patients were excluded if they were expected to live for less than six months (pragmatic decision by randomising team) or had undergone an intervention to the target femoro-popliteal vessel within the past 12 months. Participants had to be able and willing to complete the health related quality of life and health economic questionnaires. Additionally, they needed to speak sufficient English (where translation facilities were insufficient) and to have capacity to provide written informed consent.
The National Research Ethics Committee, North of Scotland, provided ethical approval on 26 August 2015 (15/NS/0070). Declaration of Helsinki and Good Clinical Practice guidelines were followed.

Randomisation and masking

Participants were randomly assigned (1:1:1) using a secure online system to femoro-popliteal PBA±BMS, DCBA±BMS, or DES as their first revascularisation procedure. Minimisation was used to balance assignments according to age (≤60 years, 61-70 years, 71-80 years, >80 years), sex (male, female), diabetes mellitus (yes, no), chronic kidney disease (yes, no), severity of clinical disease (ischaemic rest or night pain only, tissue loss only, both), previous (permissible) intervention to the trial leg (yes, no), target artery (superficial femoral artery only, popliteal artery only, both), intention for a hybrid (endovascular with additional surgical) procedure (yes, no), and recruiting centre. Randomisation was provided centrally by the Birmingham Clinical Trials Unit, University of Birmingham, UK. Participants, study staff, and investigators were not masked to treatment allocation.

Procedures

Vascular surgeons and interventional radiologists were encouraged to perform the allocated endovascular procedures using their preferred techniques and devices. Any UK licensed PBA, DCBA, BMS, or DES was permitted. The allocated intervention was to be performed within two weeks of randomisation where possible and clinically appropriate. All additional management strategies and procedures, including wound care and medical therapy, were at the discretion of the responsible clinicians and in the best interests of the patient. Participants were followed locally one month after the initial revascularisation; six, 12, and 24 months after randomisation; and then annually until the last participant had been followed for 24 months. Clinical (haemodynamic, medical treatments, clinical status) and health related quality of life data were collected during these visits.
When face-to-face visits were not possible (particularly during covid-19), as much data as possible were obtained by telephone. In England and Wales, death and major amputation data were obtained until the end of follow-up from NHS Digital (the statutory custodian for health and social care data for England and Wales).

Outcomes

The primary outcome was amputation free survival, defined as the time to major (above ankle) amputation of the trial leg or death from any cause (whichever occurred first, time-to-event analysis). Clinical secondary outcomes included time to death from any cause (overall survival); time to major amputation of the trial leg; further major revascularisation of the trial leg; major adverse limb events (defined as major amputation of the trial leg or additional major revascularisation of the trial leg); major adverse cardiovascular events (defined as a new chronic limb threatening ischaemia affecting the non-trial leg, major amputation of the non-trial leg, myocardial infarction, stroke, or transient ischaemic attack); 30 day (after first intervention) mortality and morbidity; relief of ischaemic pain (assessed using the visual analogue scale, the Vascular Quality of Life Questionnaire tool and opiate usage); healing of tissue loss (assessed using the perfusion, extent, depth, infection, and sensation (PEDIS), and wound, ischaemic, and foot infection (WIFi) scoring systems); and changes in ankle brachial pressure index and/or toe brachial pressure index. Health related quality of life was assessed using generic (Euroqol 5D 5L, Short Form-12, ICEpop capability measure for older people, and Hospital Anxiety and Depression Scale) and disease specific (Vascular Quality of Life Questionnaire) tools. Serious adverse events were recorded up to 30 days after the first post-randomisation revascularisation.
Participants were defined as adherent to the allocated trial procedure if the first revascularisation after randomisation was endovascular, the randomised class of device was used, and the randomised device was used in the superficial femoral artery, in the popliteal artery, or in both.

Statistical analysis

The original sample size was based on a time-to-event analysis making two key comparisons: PBA±BMS v DES, and PBA±BMS v DCBA±BMS. Recruitment was to take place over three years (20% year 1, 40% in years 2 and 3) with a minimum follow-up of two years in all participants. Based on the BASIL-1 trial, amputation free survival rates were assumed to be 0.70 in year 1, 0.64 in year 2, 0.52 in year 3, 0.46 in year 4, and 0.36 in year 5. Allowing for 5% attrition and the BASIL-1 survival estimates, 861 participants (having 342 primary outcome events) would provide 90% power to detect a reduction in amputation free survival of 40% (hazard ratio 0.60), equivalent to an absolute difference in amputation free survival of 13% at year 2 and corresponding to one out of seven participants needing to benefit from DES or DCBA over PBA (number needed to treat), at the 2.5% significance level (to account for the increased type I error risk associated with making two key comparisons), using the artsurv (version 1.0.7) programme in Stata (version 17.0). However, an error was identified in the implementation of the macro used to compute the original sample size, whereby a smaller effect size (than the targeted 40% relative reduction) had been applied for one of the two comparisons. This resulted in an overestimation of the sample size and number of primary outcome events needed. Using the same parameters as in the original calculation (listed above), the corrected sample size was 749 and the number of primary outcome events required was 291. Because the anticipated recruitment rates were not achieved, recruitment continued beyond year 3, and the median follow-up was longer than planned.
Therefore, the number of randomised patients required to observe the target number of 291 events for 90% power was reduced. With support of the funder and independent oversight from the data monitoring committee, recruitment rates, length of follow-up, and pooled event rates over time were modelled to predict the number of participants needed to reach 291 events, with a minimum follow-up of two years. Modelling was updated approximately every six months based on emerging data. BASIL-3 closed to recruitment on 31 August 2021, with 481 participants randomised and 296 primary outcomes observed. A statistical analysis plan was specified before the analysis. All outcomes were analysed in the intention-to-treat population (all randomised participants irrespective of adherence). Differences between groups were presented with two sided 97.5% confidence intervals, adjusted for minimisation variables as fixed effects, and recruiting centre as a random effect (or as a shared frailty variable in the time-to-event analyses) when convergence was possible. P values are only presented for the primary outcome and were not corrected for multiple comparisons. Amputation free survival was analysed using a Cox proportional hazards model to generate an adjusted hazard ratio. Statistical significance of the treatment group parameter was determined through examination of the associated χ 2 statistic. Kaplan-Meier survival curves were constructed for visual presentation and absolute differences and number needed to treat values in failure probabilities (with 97.5% confidence intervals) were estimated between groups from these curves, computed at two years (as in the justification for the study sample size) and five years (the official final follow-up point). The confidence intervals for number needed to treat are presented in the notation of number needed to treat for benefit (NNTB) and for harm (NNTH) as introduced by Altman. 
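The relation between the Kaplan-Meier curves, the failure probabilities at a fixed time point, and the number needed to treat described above can be illustrated with a minimal, self-contained sketch. All times and probabilities below are invented for illustration and are not BASIL-3 data:

```python
# Illustrative sketch only: a minimal Kaplan-Meier product-limit
# estimator and the number needed to treat (NNT) derived from
# failure probabilities at a fixed time point. All numbers are
# hypothetical examples, not trial results.

def kaplan_meier(times, events):
    """Return [(time, survival)] steps; events: 1 = event, 0 = censored."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    for t in sorted({time for time, _ in data}):
        at_t = [ev for time, ev in data if time == t]
        deaths = sum(at_t)
        if deaths:
            survival *= 1 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= len(at_t)  # events and censorings leave the risk set
    return curve

def failure_probability(curve, t):
    """1 - S(t), read off the Kaplan-Meier step function."""
    surv = 1.0
    for step_time, step_surv in curve:
        if step_time <= t:
            surv = step_surv
    return 1 - surv

def nnt(p_control, p_treatment):
    """NNT = 1 / absolute risk difference; positive = NNTB, negative = NNTH."""
    return 1 / (p_control - p_treatment)

# Five hypothetical participants, follow-up in months:
# events at 6, 12, and 20; censored at 10 and 24.
curve = kaplan_meier([6, 10, 12, 20, 24], [1, 0, 1, 1, 0])
print(curve)  # step function: [(6, 0.8), (12, 0.533...), (20, 0.266...)]

# An absolute difference in failure probability of 13% at two years
# (for example, 0.43 v 0.30) corresponds to roughly one in seven
# participants benefiting.
print(round(nnt(0.43, 0.30), 1))  # 7.7
```

The censoring handling is the point of the product-limit form: censored participants contribute to the risk set up to their last known follow-up without being counted as events, which is why simple proportions cannot replace the Kaplan-Meier estimate here.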
A further prespecified analysis was conducted using a flexible parametric model with a time varying covariate for treatment to consider the effects of non-proportional hazards. The number of internal knots for the baseline hazard function and time varying covariate was determined from the model with the lowest Bayesian information criterion. A plot of hazard ratio over time (with corresponding 97.5% confidence intervals) was estimated. Overall survival was analysed as per the primary outcome (amputation free survival). Other secondary time-to-event outcomes (major amputations, major adverse limb events, major adverse cardiovascular events) were considered in a competing risks framework to account for participants who died before reporting an event. Cause specific hazard ratios and subdistribution hazard ratios were estimated using cause specific Cox models and Fine-Gray models, respectively. Cumulative incidence plots were produced for visual presentation. Continuous secondary outcomes were summarised as means and standard deviations at each time point where appropriate, and adjusted mean differences were estimated using mixed effects repeated measures linear regression models. Binary secondary outcomes were summarised as rates and frequencies at each time point if appropriate. Adjusted risk ratios or adjusted risk differences were estimated using log binomial/identity binomial generalised linear mixed models or log binomial/identity binomial exchangeable generalised estimating equations. All models with repeated measures included the baseline score as the first time point. Sensitivity and supportive analyses of amputation free survival, overall survival, and time to major amputation included a per protocol analysis based only on participants who were adherent. Preplanned subgroup analyses of amputation free survival were completed for the minimisation variables, with the exception of the recruiting centre.
The effects of these subgroups were examined by adding the subgroup by treatment interaction parameters to the regression model. Subgroup specific hazard ratios and the ratio of hazard ratios were estimated from the model coefficients. Multiple imputation was not used for any missing outcome data. Binary clinical outcome data were analysed in a time-to-event framework, censoring on last known follow-up. Other outcomes that were measured at several time points were analysed using repeated measures generalised linear mixed models with a compound symmetry covariance structure, which includes an implicit imputation of general missing data. All analyses were done in SAS (version 9.4) or Stata (version 18.0). The trial steering committee provided independent oversight. Interim analyses of effectiveness and safety endpoints were performed on behalf of the data monitoring committee on an approximately annual basis during recruitment, using the Haybittle-Peto principle, so that no adjustment was made to the final P values. The trial was registered (ISRCTN14469736).

Patient and public involvement

We have been supported throughout the trial by a designated patient and public involvement (PPI) group who were part of the trial steering committee. PPI members were consulted throughout the trial to improve our understanding of the needs of patients with chronic limb threatening ischaemia. PPI members commented on all patient facing material to ensure that it was clear and comprehensible. We organised collaboration days during the trial and the PPI representatives attended these meetings.
BASIL-3 was an open label, pragmatic, three arm, multicentre, superiority, phase 3, randomised controlled trial conducted at most (35) of the major UK NHS vascular units. Eligible participants were those who presented with chronic limb threatening ischaemia caused by atherosclerotic peripheral arterial disease and who, after a process of shared decision making, were offered and consented to a femoro-popliteal, with or without an infra-popliteal, endovascular revascularisation in preference to surgical revascularisation. As is standard UK practice, chronic limb threatening ischaemia was diagnosed on the basis of history and clinical examination, supported by selective use of haemodynamic testing and arterial imaging (one or more of duplex ultrasound, computed tomography angiography, magnetic resonance angiography, and digital subtraction angiography). To be randomised, a patient had to be assessed by a multidisciplinary team (including at least two vascular surgery or interventional radiology consultants), and deemed suitable for all three endovascular strategies. None of the patients randomised had undergone previous major (above the ankle) amputation of the trial leg. Patients presenting with intermittent claudication were not eligible for enrolment. None of the participants had a planned major (above ankle) amputation of the trial leg at the point of randomisation. Patients were excluded if they were expected to live for less than six months (pragmatic decision by randomising team) or had undergone an intervention to the target femoro-popliteal vessel within the past 12 months. Participants had to be able and willing to complete the health related quality of life and health economic questionnaires. Additionally, they needed to speak sufficient English (where translation facilities were insufficient) and to have capacity to provide written informed consent. 
The National Research Ethics Committee, North of Scotland, provided ethical approval on 26 August 2015 (15/NS/0070). Declaration of Helsinki and Good Clinical Practice guidelines were followed.
Participants were randomly assigned (1:1:1) using a secure online system to femoro-popliteal PBA±BMS, DCBA±BMS, or DES as their first revascularisation procedure. Minimisation was used to balance assignments according to age (≤60 years, 61-70 years, 71-80 years, >80 years), sex (male, female), diabetes mellitus (yes, no), chronic kidney disease (yes, no), severity of clinical disease (ischaemic rest or night pain only, tissue loss only, both), previous (permissible) intervention to the trial leg (yes, no), target artery (superficial femoral artery only, popliteal artery only, both), intention for a hybrid (endovascular with additional surgical) procedure (yes, no), and recruiting centre. Randomisation was provided centrally by the Birmingham Clinical Trials Unit, University of Birmingham, UK. Participants, study staff, and investigators were not masked to treatment allocation.
Vascular surgeons and interventional radiologists were encouraged to perform the allocated endovascular procedures using their preferred techniques and devices. Any UK licensed PBA, DCBA, BMS, or DES was permitted. The allocated intervention was to be performed within two weeks of randomisation where possible and clinically appropriate. All additional management strategies and procedures, including wound care and medical therapy, were at the discretion of the responsible clinicians and in the best interests of the patient. Participants were followed locally one month after the initial revascularisation; six, 12, and 24 months after randomisation; and then annually until the last participant had been followed for 24 months. Clinical (haemodynamic, medical treatments, clinical status) and health related quality of life data were collected during these visits. When face-to-face visits were not possible (particularly during covid-19), as much data as possible were obtained by telephone. In England and Wales, death and major amputation data were obtained until the end of follow-up from NHS Digital (the statutory custodian for health and social care data for England and Wales).
The primary outcome was amputation free survival, defined as the time to major (above ankle) amputation of the trial leg or death from any cause (whichever occurred first, time-to-event analysis). Clinical secondary outcomes included time to death from any cause (overall survival); time to major amputation of the trial leg; further major revascularisation of the trial leg; major adverse limb events (defined as major amputation of the trial leg or additional major revascularisation of the trial leg); major adverse cardiovascular events (defined as a new chronic limb threatening ischaemia affecting the non-trial leg, major amputation of the non-trial leg, myocardial infarction, stroke, or transient ischaemic attack); 30 day (after first intervention) mortality and morbidity; relief of ischaemic pain (assessed using the visual analogue scale, the Vascular Quality of Life Questionnaire tool and opiate usage); healing of tissue loss (assessed using the perfusion, extent, depth, infection, and sensation (PEDIS), and wound, ischaemic, and foot infection (WIFi) scoring systems); and changes in ankle brachial pressure index and/or toe brachial pressure index. Health related quality of life was assessed using generic (Euroqol 5D 5L, Short Form-12, ICEpop capability measure for older people, and Hospital Anxiety and Depression Scale) and disease specific (Vascular Quality of Life Questionnaire) tools. Serious adverse events were recorded up to 30 days after the first post-randomisation revascularisation. Participants were defined as adherent to the allocated trial procedure if the first revascularisation after randomisation was endovascular, the randomised class of device was used, and the randomised device was used in the superficial femoral artery, in the popliteal artery, or in both.
The original sample size was based on a time-to-event analysis making two key comparisons: PBA±BMS v DES, and PBA±BMS v DCBA±BMS. Recruitment was to take place over three years (20% year 1, 40% in years 2 and 3) with a minimum follow-up of two years in all participants. Based on the BASIL-1 trial, amputation free survival rates were assumed to be 0.70 in year 1, 0.64 in year 2, 0.52 in year 3, 0.46 in year 4, and 0.36 in year 5. Allowing for 5% attrition and the BASIL-1 survival estimates, 861 participants (having 342 primary outcome events) would provide 90% power to detect a reduction in amputation free survival of 40% (hazard ratio 0.60, equivalent to an absolute difference in amputation free survival of 13% at year 2, corresponding to one out of seven participants needing to benefit from DES or DCBA over PBA (number needed to treat) at the 2.5% significance level (to account for increased type I error risk associated with making two key comparisons) using the artsurv (version 1.0.7) programme in Stata (version 17.0). However, an error was identified in the implementation of the macro used to compute the original sample size, whereby a smaller effect size (than the targeted 40% relative reduction) had been applied for one of the two comparisons. This resulted in an overestimation of the sample size and number of primary outcome events needed. Using the same parameters as in the original calculation (listed above), the corrected sample size was 749 and the number of primary outcome events required was 291. Because the anticipated recruitment rates were not achieved, recruitment continued beyond year 3, and the median follow-up was longer than planned. Therefore, the number of randomised patients required to observe the target number of 291 events for 90% power was reduced. 
With support of the funder and independent oversight from the data monitoring committee, recruitment rates, length of follow-up, and pooled event rates over time were modelled to predict the number of participants needed to reach 291 events, with a minimum follow-up of two years. Modelling was updated approximately every six months based on emerging data. BASIL-3 closed to recruitment on 31 August 2021, with 481 participants randomised and 296 primary outcomes observed. A statistical analysis plan was specified before the analysis. All outcomes were analysed in the intention-to-treat population (all randomised participants, irrespective of adherence). Differences between groups were presented with two sided 97.5% confidence intervals, adjusted for minimisation variables as fixed effects and recruiting centre as a random effect (or as a shared frailty variable in the time-to-event analyses) when convergence was possible. P values are only presented for the primary outcome and were not corrected for multiple comparisons. Amputation free survival was analysed using a Cox proportional hazards model to generate an adjusted hazard ratio. Statistical significance of the treatment group parameter was determined through examination of the associated χ² statistic. Kaplan-Meier survival curves were constructed for visual presentation, and absolute differences in failure probabilities and number needed to treat values (with 97.5% confidence intervals) were estimated between groups from these curves, computed at two years (as in the justification for the study sample size) and five years (the official final follow-up point). The confidence intervals for number needed to treat are presented in the notation of number needed to treat for benefit (NNTB) and for harm (NNTH), as introduced by Altman. A further prespecified analysis was conducted using a flexible parametric model with a time varying covariate for treatment to consider the effects of non-proportional hazards.
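For illustration, the Kaplan-Meier product-limit estimate underlying these survival curves can be computed in a few lines of pure Python (a minimal sketch using made-up follow-up data, not trial data):

```python
def kaplan_meier(times, observed):
    """Product-limit estimate of S(t).

    times: follow-up times; observed: 1 = event (e.g. amputation or
    death), 0 = censored. Returns [(t, S(t))] at each distinct event
    time; survival only drops at times where an event occurs.
    """
    data = sorted(zip(times, observed))
    s, at_risk, curve = 1.0, len(data), []
    i = 0
    while i < len(data):
        t, d, removed = data[i][0], 0, 0
        while i < len(data) and data[i][0] == t:
            d += data[i][1]       # events at time t
            removed += 1          # events plus censorings leave the risk set
            i += 1
        if d:
            s *= 1 - d / at_risk  # multiply in the conditional survival step
            curve.append((t, s))
        at_risk -= removed
    return curve

curve = kaplan_meier([1, 2, 3, 4, 5], [1, 0, 1, 1, 0])
```

The failure probability used for the absolute differences and number needed to treat values is then simply 1 − S(t) at the chosen time point.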
The number of internal knots for the baseline hazard function and time varying covariate was determined from the model with the lowest Bayesian information criterion. A plot of hazard ratio over time (with corresponding 97.5% confidence intervals) was estimated. Overall survival was analysed as per the primary outcome (amputation free survival). Other secondary time-to-event outcomes (major amputations, major adverse limb events, major adverse cardiovascular events) were considered in a competing risks framework to account for participants who died before reporting an event. Cause specific hazard ratios and subdistribution hazard ratios were estimated using cause specific Cox models and Fine-Gray models, respectively. Cumulative incidence plots were produced for visual presentation. Continuous secondary outcomes were summarised as means and standard deviations at each time point where appropriate, and adjusted mean differences were estimated using mixed effects repeated measures linear regression models. Binary secondary outcomes were summarised as rates and frequencies at each time point if appropriate. Adjusted risk ratios or adjusted risk differences were estimated using log binomial/identity binomial generalised linear mixed models or log binomial/identity binomial exchangeable generalised estimating equations. All models with repeated measures included the baseline score as the first time point. Sensitivity and supportive analyses of amputation free survival, overall survival, and time to major amputation included a per protocol analysis based only on participants who were adherent. Preplanned subgroup analyses of amputation free survival were completed for the minimisation variables, with the exception of the recruiting centre. The effects of these subgroups were examined by adding the subgroup by treatment interaction parameters to the regression model. Subgroup specific hazard ratios and the ratio of hazard ratios were estimated from the model coefficients.
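The cumulative incidence functions behind such plots weight the cause specific hazard by overall survival, so that participants who die before the event of interest are not treated as merely censored. A minimal pure-Python sketch of this estimator (illustrative data only; 1 denotes the event of interest, 2 a competing event, 0 censoring):

```python
def cumulative_incidence(times, events, cause=1):
    """Cumulative incidence of `cause` in the presence of competing risks.

    At each event time t, the increment is S(t-) * d_cause / n_at_risk,
    where S(t-) is the all-cause Kaplan-Meier survival just before t.
    Returns [(t, CIF(t))] at each time the cause of interest occurs.
    """
    data = sorted(zip(times, events))
    n = len(data)
    surv, at_risk, cif, out = 1.0, n, 0.0, []
    i = 0
    while i < n:
        t = data[i][0]
        d_cause = d_all = removed = 0
        while i < n and data[i][0] == t:
            e = data[i][1]
            d_cause += e == cause
            d_all += e != 0
            removed += 1
            i += 1
        cif += surv * d_cause / at_risk  # cause specific step, weighted by S(t-)
        surv *= 1 - d_all / at_risk      # all-cause Kaplan-Meier update
        if d_cause:
            out.append((t, cif))
        at_risk -= removed
    return out

# e.g. amputation at t=1 and t=3, death (competing) at t=2, censoring at t=4
cif_amp = cumulative_incidence([1, 2, 3, 4], [1, 2, 1, 0], cause=1)
```

By construction the cause specific incidences and the all-cause survival sum to one, which is the property the competing risks framework preserves.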
Multiple imputation was not used for any missing outcome data. Binary clinical outcome data were analysed in a time-to-event framework, censoring on last known follow-up. Other outcomes that were measured at several time points were analysed using repeated measures generalised linear mixed models with a compound symmetry covariance structure, which includes an implicit imputation of general missing data. All analyses were done in SAS (version 9.4) or Stata (version 18.0). The trial steering committee provided independent oversight. Interim analyses of effectiveness and safety endpoints were performed on behalf of the data monitoring committee on an approximately annual basis during recruitment, using the Haybittle-Peto principle, so that no adjustment was made to the final P values. The trial was registered (ISRCTN14469736).
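The Haybittle-Peto approach mentioned above uses a very stringent threshold at each interim look, which is why the final P values need no multiplicity adjustment. A schematic sketch (the 0.001 boundary is the conventional choice and is assumed here rather than stated in the text):

```python
HAYBITTLE_PETO_BOUNDARY = 0.001  # conventional stringent interim threshold

def stop_at_interim(interim_p_value):
    """Recommend early stopping only for overwhelming evidence, leaving
    the final analysis to be judged at the conventional level with no
    adjustment for the interim looks."""
    return interim_p_value < HAYBITTLE_PETO_BOUNDARY

# Three hypothetical annual interim looks
decisions = [stop_at_interim(p) for p in (0.04, 0.008, 0.0004)]
```

Only the third (overwhelming) result would trigger early stopping; modest interim differences are carried through to the final analysis.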
We have been supported throughout the trial by a designated patient and public involvement (PPI) group who were part of the trial steering committee. PPI members were consulted throughout the trial to improve our understanding of the needs of patients with chronic limb threatening ischaemia. PPI members commented on all patient facing material to ensure that they were clear and comprehensive. We organised collaboration days during the trial and the PPI representatives attended these meetings.
Between 29 January 2016 and 31 August 2021, 481 participants were randomised to PBA±BMS (n=160), DCBA±BMS (n=161), or DES (n=160). On 12 December 2018, recruitment was stopped by the trial management group, supported by the trial steering committee, because a meta-analysis had shown excess mortality in patients treated with a paclitaxel DCBA or DES. The UK Medicines and Healthcare products Regulatory Agency convened an expert advisory group, which concluded that paclitaxel devices could still be used in patients with chronic limb threatening ischaemia and recommended that BASIL-3 resume recruitment. With support from the funder and the trial steering committee, including PPI members, new ethical approval was obtained, and BASIL-3 reopened to recruitment on 16 September 2019 (table S1 gives more timeline details). One participant was randomised to DES without written informed consent and so was removed from all analyses. Of the 480 participants included in the analyses, 167 (35%) were women, and the mean age was 71.8 years (standard deviation 10.8). A total of 464 (97%) participants received an endovascular procedure as their first revascularisation, with 444 (93%) of these involving, at a minimum, the superficial femoral artery, the popliteal artery, or both. Three participants received a surgical revascularisation and 13 received no revascularisation. The allocated device was used in 142 (92%), 127 (82%), and 118 (76%) first endovascular interventions (in any femoro-popliteal artery) in the PBA±BMS, DCBA±BMS, and DES groups, respectively. This gave overall adherence rates of 140 (88%) for PBA±BMS, 122 (76%) for DCBA±BMS, and 118 (74%) for DES. A total of 426 (91%) participants received their first revascularisation within two weeks of randomisation, and the median time to first intervention after randomisation was 0 days (interquartile range 0-3 days) in all three groups.
Further details of the first revascularisation procedure, including devices used and arterial segments treated, can be found in the supplementary appendices (table S2). No patients were reported as having treatment using intravascular ultrasound as an adjunct, or with non-drug specialty balloons, or with atherectomy (such devices were not part of the protocol and were rarely used during the trial recruitment period in the UK). The median time to last clinical follow-up was 2.1 years (range 0-7.2 years) for all participants, and 3.1 years (range 0-7.2 years) in survivors. In the PBA±BMS group, 106/160 (66%) participants had a major amputation or died (no amputation free survival) compared with 97/161 (60%) in the DCBA±BMS group (adjusted hazard ratio from the Cox proportional hazards model 0.84; 97.5% confidence interval 0.61 to 1.16; P=0.22), and 93/159 (58%) in the DES group (0.83, 0.60 to 1.15; P=0.20). The median amputation free survival time was 3.16, 3.52, and 4.29 years in the PBA±BMS, DCBA±BMS, and DES groups, respectively. The absolute differences in failure probabilities and corresponding numbers needed to treat were −0.042 (97.5% confidence interval −0.164 to 0.079), NNTB 24 (NNTB 6 to ∞ to NNTH 13) and −0.031 (−0.152 to 0.091), NNTB 32 (NNTB 7 to ∞ to NNTH 11) between DCBA±BMS and PBA±BMS, and between DES and PBA±BMS, respectively, at two years (table S5). The per protocol sensitivity analysis produced consistent results. Model assumption checks were performed to assess the non-proportional hazards assumption, which was found to be violated. Flexible parametric models were therefore fitted, and a plot of the hazard ratios over time with 97.5% confidence intervals was produced. There was no evidence of varying effects in the prespecified subgroup analyses (tables S3 and S4).
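The reported number needed to treat values follow directly from the Kaplan-Meier failure probability differences at two years: the NNT is the reciprocal of the absolute risk difference (the reported integers reflect rounding of the underlying, more precise differences):

```python
def number_needed_to_treat(absolute_risk_difference):
    """Reciprocal of the absolute difference in failure probability
    at the chosen time point (here, two years)."""
    return 1 / abs(absolute_risk_difference)

nnt_dcba = number_needed_to_treat(-0.042)  # ≈23.8, reported as NNTB 24
nnt_des = number_needed_to_treat(-0.031)   # ≈32.3, reported as NNTB 32
```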
In the PBA±BMS group, 96/160 (60%) participants died from any cause compared with 90/161 (56%) in the DCBA±BMS group (adjusted hazard ratio 0.86, 97.5% confidence interval 0.62 to 1.20) and 80/159 (50%) in the DES group (0.79, 0.56 to 1.11). Table S8 presents causes of death. Model assumption checks were performed to assess the non-proportional hazards assumption, which was found to be violated (fig S1). Flexible parametric models were fitted and figure S1 provides a plot of the hazard ratio over time with 97.5% confidence intervals. In the PBA±BMS group, 23/160 (14%) participants had a major amputation compared with 18/161 (11%) participants in the DCBA±BMS group (adjusted cause specific hazard ratio 0.74, 97.5% confidence interval 0.36 to 1.50; subdistribution hazard ratio 0.76, 97.5% confidence interval 0.37 to 1.53), and 25/159 (16%) in the DES group (1.07, 0.56 to 2.05; 1.11, 0.58 to 2.11). For overall survival and time to major amputation, the per protocol sensitivity analyses produced consistent results. No differences were observed between the treatment groups relating to further interventions, 30 day morbidity and death, major adverse limb events, major adverse cardiovascular events, relief of ischaemic pain, or health related quality of life (table S6 and figures S2-S5). In the PBA±BMS group, 16/160 (10%) participants had a serious adverse event compared with 9/161 (6%) in the DCBA±BMS group and 17/159 (11%) in the DES group. One serious adverse event was considered related to the trial intervention and was unexpected (hospital admission with epistaxis resolved with sphenopalatine artery ligation). Table S7 presents further serious adverse event details. Most causes of death were reported as multifactorial and often related to several comorbidities (table S8).
Principal findings

The BASIL-3 trial showed that, in patients with chronic limb threatening ischaemia undergoing an endovascular femoro-popliteal with or without infra-popliteal revascularisation to restore limb perfusion, neither DCBA±BMS nor primary DES, when used in the femoro-popliteal segment, significantly improved amputation free survival when compared with PBA±BMS. The best estimates of the hazard ratios for amputation free survival were 0.84 for DCBA±BMS and 0.83 for DES, which is equivalent to an NNTB of 24 for DCBA±BMS and an NNTB of 32 for DES at the two year time point. The 97.5% confidence intervals for the hazard ratios ranged from 0.61 to 1.16 for DCBA±BMS and 0.60 to 1.15 for DES, which include values representing potential benefit and harm. The 97.5% confidence intervals narrowly excluded (or bordered) the 40% relative reduction (13% absolute difference at two years) in the rate of major amputation or death, which was set as the target difference a priori. Therefore, the BASIL-3 trial does not support the hypothesis that the use of DCBA±BMS, or DES, in the femoro-popliteal segment for revascularisation in patients with chronic limb threatening ischaemia confers important clinical benefit over PBA±BMS in terms of amputation free survival (the primary outcome) or a wide range of prespecified secondary clinical and patient reported outcomes. We cannot conclude that DCBA±BMS, or DES, do not offer smaller clinical benefits over PBA±BMS, which some clinicians might consider meaningful and which BASIL-3 was not powered to detect.

Strengths and limitations

BASIL-3 is a publicly funded randomised controlled trial evaluating DCBA±BMS and DES separately in patients with chronic limb threatening ischaemia. The number of primary endpoints required to attain at least 90% power to detect the prespecified target differences was exceeded. Follow-up was longer and more complete than anticipated. Cause of death data were available for all patients.
BASIL-3 contains a within trial health economic analysis that will be reported separately. The results of the intention-to-treat and per protocol analyses were consistent, which highlights the robustness of the trial findings. BASIL-3 is a pragmatic, real world trial that involved most (35) of the major UK vascular units and so reflects standard of care across the NHS. The results are likely to be applicable to other countries with similar chronic limb threatening ischaemia populations and healthcare systems. Because of concerns about the safety of paclitaxel, BASIL-3 recruitment was paused between December 2018 and September 2019. Although we have no evidence to suggest that this was the case, we cannot exclude the possibility that patients recruited before and after the pause were different. When covid-19 arrived in the UK in March 2020, recruitment and follow-up became increasingly difficult. However, we have no evidence to suggest that covid altered the outcome of the trial in terms of the differences observed between the three arms. Covid was recorded as the cause of death for only 12 patients. In keeping with almost all other studies, BASIL-3 treats DCB and DES as classes of devices. Although it has been suggested that there might be clinically important differences between different types (brands) of DCB and DES, comparative data are virtually non-existent in chronic limb threatening ischaemia. The use of newer devices that became available during the BASIL-3 recruitment period was low. In terms of bailout stenting after PBA or DCB, biomimetic stenting was used in 13 patients in total from both cohorts. Similarly, the use of non-paclitaxel DCB (one patient, sirolimus) and non-paclitaxel DES (eight patients, everolimus) was exceedingly low. It is unknown whether these technologies are superior to standard BMS or paclitaxel based devices, and the numbers in BASIL-3 are too small to allow any meaningful comparisons.
Details about vessel preparation (such as atherectomy, intravascular ultrasound, and intravascular lithotripsy) were not collected and were not part of the protocol. The use of such devices in the UK was low during the BASIL-3 recruitment period (most participants were recruited before 2020). It is unclear what additional benefit, if any, such devices provide in the absence of high quality, publicly funded evidence. The non-adherence rate, especially in the drug eluting arms, was slightly higher than expected and the reasons for this are not completely clear. However, there were some instances where endovascular treatment was not possible. BASIL-3 was designed to be a pragmatic trial that is a true reflection of real world clinical practice. In the trial, 464/480 participants received an endovascular procedure as their first intervention. This non-adherence was mostly driven by different devices being used at the discretion of the treating physician. Patients with a previous intervention (in the preceding 12 months) were excluded because it was felt that this would increase the risk of treatment failure with subsequent intervention. Also, given a previous failed intervention, it was felt that clinicians might not have equipoise to try PBA again and would be more likely to use drug eluting technology or consider a different approach, which could have been a major barrier to reaching clinical equipoise.

Comparison with other studies

Before BASIL-3, there was quantitatively and qualitatively limited and conflicting evidence regarding the clinical effectiveness, and even more so the cost effectiveness, of DCBA or DES in patients presenting with chronic limb threatening ischaemia. Unlike BASIL-3, many studies analysed DCBA and DES together, even though they are very different technologies with their own advantages and disadvantages in different clinical and anatomic scenarios.
As a result, there is considerable ongoing debate and controversy around the use of these devices in patients with chronic limb threatening ischaemia, and large variations remain in practice within and between countries. In the UK, in 2012, the lack of evidence of clinical and cost effectiveness led NICE not to recommend DCBA or DES for the treatment of chronic limb threatening ischaemia. However, BASIL-3 was funded by NIHR HTA as a direct result of a NICE research recommendation, and the trial results will be available to NICE when it comes to review its UK national guidelines on the management of chronic limb threatening ischaemia. BASIL-3 is likely to help inform vascular practice in other countries with similar chronic limb threatening ischaemia populations and healthcare systems. BASIL-3 comprised an anatomically heterogeneous cohort typical of patients with chronic limb threatening ischaemia. We are collecting prerandomisation imaging to explore whether there are anatomic subgroups within the BASIL-3 cohort where DCBA or DES might confer clinical benefit. BASIL-3 did not include standardised follow-up imaging. At present, we do not have data on anatomic endpoints such as restenosis. However, a more detailed analysis of the nature and timing of further repeat and crossover interventions will be the subject of a further report. The BASIL-3 within-trial health economic analysis will be reported separately in due course.

Policy implications

In the BASIL-3 trial, the use of DCBA±BMS and DES in participants with chronic limb threatening ischaemia secondary to femoro-popliteal with or without infra-popliteal disease did not confer clinical benefit over the use of PBA±BMS. Therefore, in this patient group, BASIL-3 does not support a role for these technologies in terms of clinical effectiveness, based on the effect size that the trial was powered to determine.
We cannot exclude smaller absolute differences in the primary outcome, but it is unclear if any smaller absolute difference would be clinically meaningful. A separate health economics report will be published in due course to assess the cost effectiveness and cost utility analyses of these devices.

Conclusions

In the BASIL-3 trial, the use of DCBA±BMS and DES did not confer important clinical benefit over PBA±BMS in the femoro-popliteal segment in patients with chronic limb threatening ischaemia undergoing endovascular femoro-popliteal, with or without infra-popliteal, revascularisation.

What is already known on this topic

- In 2012, the UK National Institute for Health and Care Excellence (NICE) established that a randomised trial was needed to compare the clinical effectiveness of endovascular strategies in patients undergoing primary revascularisation for chronic limb threatening ischaemia
- These strategies include femoro-popliteal plain balloon angioplasty with or without bare metal stenting, drug coated balloon angioplasty with or without bare metal stenting, and drug eluting stenting
- NICE recommended that such a trial be performed, and this led the UK National Institute for Health and Care Research, Health Technology Assessment (NIHR HTA) programme to fund the BASIL-3 trial reported here

What this study adds

- Recent systematic reviews and meta-analyses have confirmed that BASIL-3 is the only publicly funded randomised controlled trial to compare the clinical effectiveness of these three endovascular strategies in patients undergoing revascularisation for chronic limb threatening ischaemia
- In the BASIL-3 trial, the use of drug coated balloons and drug eluting stents in the femoro-popliteal segment did not confer significant clinical benefit over the use of plain balloons and bare metal stents
- BASIL-3 does not support a role for drug coated balloons or drug eluting stents in the femoro-popliteal segment in the management of patients with chronic limb threatening ischaemia undergoing endovascular revascularisation
Does timely reporting of preoperative CT scans influence outcomes for patients following emergency laparotomy? | e0822851-9334-40ba-8346-9a2cece8958c | 11785439 | Surgical Procedures, Operative[mh] | Computed tomography (CT) has an essential role in diagnosing surgical pathology and devising appropriate management plans (both operative and nonoperative). Early CT scanning after admission to hospital is increasingly used before surgery for acute surgical abdominal pathologies. Rapid diagnosis followed by intervention is likely to have an impact on patient outcomes. In the United Kingdom (UK), the use of CT imaging in the preoperative period is suggested as being associated with decreased mortality for high-risk surgical patients, and is a minimum standard in the emergency laparotomy patient pathway. The best practice guidelines used to define the clinical standards for CT scan timings were published within the ‘NHS Services, Seven Days a Week Forum’ and the Royal College of Surgeons of England document ‘Emergency Surgery Guidance for Providers, Commissioners and Service Planners; Standards for Unscheduled Surgical Care’. , These guidelines state that a CT scan should be reported within 1h of request for ‘critical’ patients (when the test will alter their management at the time) and within 12h for ‘urgent’ patients (if the test will alter their management but not necessarily that day). It has been reported that there is a higher risk of mortality with delays in operative intervention and source control for gastrointestinal perforation. , Some of the delay in emergency surgery may be attributed to delays in preoperative diagnostic imaging. A study from the USA reported that delays in preoperative CT scanning can have adverse outcomes in the elderly population and higher complication rates. 
Although early preoperative imaging is an audit standard according to the National Emergency Laparotomy Audit (NELA), a study of the association between the time taken for imaging and patient outcomes for those undergoing emergency laparotomy has not yet been undertaken in the UK. The aim of the current study was to investigate the relationship between timing of CT (from request to report) and outcomes for patients who require emergency laparotomy, to determine whether this standard is justified for this patient group. We hypothesised that patients with delayed CT imaging may have worse outcomes than those who underwent CT scanning within the audit standards.
Study design and setting

An observational study was undertaken to examine the relationship between adherence to best practice guidance on CT scan timings (from request to report) and patient mortality. This was done using a database extracted from a single site's NELA records between January 2014 and December 2021. There were no changes in the number or availability of CT scanners or emergency operating theatres during the study period. Institutional approval was granted before data collection.

CT timing guidance

For the current study, the category of scan was taken directly from the NELA records (the study investigators did not make their own judgements about the scan categories). These were categorised into 'critical' or 'urgent'. Scans were considered to meet the guideline if they were reported for these categories within 1 and 12h of the scan request, respectively. Because the category of scans was taken from the NELA records, this was not declared at the time of the CT request, and the categorisation is not available on the CT requesting system at the NHS trust. Urgency of scanning at this trust is usually relayed to the radiologist on-call via electronic request and telephone call.

Data collected

All data were taken directly from the NELA records for patients. These included demographic details (age, gender), physiological status, American Society of Anesthesiologists (ASA) score and NELA mortality risk prediction, malignancy status, timing and category of scans (critical or urgent), timing of admission (day or night, defined as 8am to 8pm and 8pm to 8am, respectively), and adherence to timings guidance. Timings were calculated by measuring the period between the CT scan being requested and reported. The primary outcome of interest was 30-day mortality, which was taken from the NELA records.

Statistical analysis

Continuous data are summarised as median and interquartile range (IQR), and categorical data are summarised as number (percentage).
Simple pairwise univariable analyses were undertaken using the Mann–Whitney U test for continuous variables and chi-squared analysis for categorical variables. Separate univariable and multivariable logistic regression models were used to determine the odds ratio (OR) and 95% confidence intervals (95% CI) for both adherence to the CT timings standard and 30-day mortality. All models were undertaken using independent variables that were selected a priori because of their credible influence on the dependent variables of interest, and included age, sex, nighttime admission, NELA mortality prediction and ‘critical’ status. A value of p < 0.05 was considered statistically significant. Analyses were undertaken using GraphPad Prism (version 9.4; GraphPad Software) and R (version 1.4; R Foundation for Statistical Computing, Vienna, Austria).
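The timing standard applied throughout this study (report within 1h of request for 'critical' scans and within 12h for 'urgent' scans) reduces to a simple predicate on two timestamps. A minimal sketch; the function and variable names are illustrative and not part of the NELA dataset:

```python
from datetime import datetime, timedelta

# Reporting windows from the best practice guidance: 'critical' scans
# reported within 1 hour of request, 'urgent' scans within 12 hours.
STANDARD = {"critical": timedelta(hours=1), "urgent": timedelta(hours=12)}

def meets_timing_standard(category, requested, reported):
    """True if the request-to-report interval meets the window for the
    given scan category ('critical' or 'urgent')."""
    return (reported - requested) <= STANDARD[category]

t0 = datetime(2021, 3, 1, 22, 0)
# A critical scan reported 45min after request meets the standard...
print(meets_timing_standard("critical", t0, t0 + timedelta(minutes=45)))           # True
# ...but one reported after 8h 15min (the study's median for critical scans) does not.
print(meets_timing_standard("critical", t0, t0 + timedelta(hours=8, minutes=15)))  # False
```
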
Study patient characteristics

There were 1,299 patients with a median age of 66 (IQR 52–76) years; 626/1,299 (48%) were male. Patient characteristics are summarised and compared between those who died within 30 days and those who survived. On pairwise univariable analysis, patients who died within 30 days were more likely to be older, have a higher ASA score and a greater NELA mortality risk, and were more likely to require a critical scan than those who survived. Those who survived were more likely to have had a CT scan that adhered to the timings standard than those who died.

CT timings

There were 622/1,299 (48%) critical and 677/1,299 (52%) urgent CT scans during the study period. A CT scan that was classified as critical was associated with a higher NELA mortality risk compared with a CT scan that was classified as urgent (11% [IQR 4–38%] vs 4% [IQR 2–10%], respectively; p < 0.001). Only 360/1,299 (28%) of scans were undertaken within the timings standard, including 359 of 677 (53%) urgent and 1 of 622 (0.2%) critical scans (p < 0.001). The median time between request and report was 8h 15min (IQR 3h 48min to 16h 38min) for critical scans and 11h 2min (IQR 5h 12min to 20h 48min) for urgent scans. There was no significant trend in timings over the study period. On univariable logistic regression analysis, patients were less likely to have a scan within the timings standard if they had higher ASA scores, a greater NELA risk of mortality and a scan that was classified as critical. When a multivariable logistic regression model incorporated age, gender, nighttime admission, ASA, NELA mortality risk and category of scan, patients remained less likely to have a scan within the timings standard if it was classified as critical. They also appeared to be more likely to adhere to guidance during the night.

Thirty-day mortality

Some 142/1,299 (11%) patients died within 30 days. There was no significant trend in mortality over time.
Mortality at 30 days was associated with poorer adherence to the CT timings standard. However, for the 677 patients in the subgroup who had urgent scans, 30-day mortality was 27/359 (8%) for those who met the standard and 19/318 (6%) for those who did not (p = 0.425). On univariable logistic regression analysis, patients were more likely to die within 30 days if they were older, had a higher ASA score and a greater NELA risk of mortality. When a multivariable logistic regression model was used to incorporate age, gender, nighttime admission, ASA, NELA mortality risk, malignant status and adherence to the timings standard, adherence to timings was no longer a statistically significant independent variable; instead, only age, ASA and NELA mortality risk remained statistically significant.
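As a cross-check on the urgent-subgroup comparison above (27/359 deaths when the standard was met vs 19/318 when it was not), an unadjusted odds ratio with a Woolf (log-scale) 95% confidence interval can be computed directly from the 2×2 table. This is an illustrative sketch, not the study's GraphPad/R analysis:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Woolf (log) confidence interval.
    a/b: outcome present/absent in the exposed group;
    c/d: outcome present/absent in the unexposed group."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Exposed = met the timings standard; outcome = death within 30 days.
or_, lo, hi = odds_ratio_ci(27, 359 - 27, 19, 318 - 19)
# The interval spans 1, consistent with the non-significant p = 0.425.
print(f"OR={or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")  # OR=1.28 (95% CI 0.70 to 2.35)
```
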
The main finding from the current study is that adherence to the best practice standard for the timing of a CT scan from request to report appears at first to be associated with better 30-day survival, but this finding is no longer significant in the adjusted models. This appears to be a result of scans being less likely to adhere to guidance for patients who were at higher risk (higher NELA mortality risk, higher ASA scores and more likely to be categorised as 'critical'). Sicker patients appeared to be less likely to have a CT scan that adhered to guidance, and also more likely to die within 30 days. Even though the data demonstrate that patients receiving CT scans more quickly were less likely to die within 30 days, scan timing appears not to be the only factor affecting outcome. CT scans are the mainstay investigation for patients presenting with an acute abdomen and can support decision making in patients who might require laparotomy. The fourth NELA report was the first to recommend that patients who require immediate surgical management should not be delayed by waiting for a CT scan. Other authors have reported improved outcomes with early CT scans for patients undergoing emergency laparotomy for trauma. There is some evidence that earlier CT scanning, by improving diagnostic accuracy and decision making, reduces the time to operative intervention. Gil-Sun Hong et al report an improvement in intensive care unit admissions with a dedicated radiology team for emergency surgery, which was associated with faster scans and earlier operative management. However, the current study demonstrates that the relationship between timings and outcomes for emergency non-trauma laparotomy is more complex and is influenced by other patient factors.
In the trauma setting, there is similar evidence that the timing of CT scans is not as significant a factor as other patient factors such as age and injury severity, and that patients who undergo scanning within the guidelines are not necessarily matched with those who do not. Some investigators have examined the physical distance between the CT scanner and the trauma room, noting reduced CT scan times and improved survival when the scanner was located within 50m. Other studies report improved outcomes and time benefits in patients with both penetrating and blunt injuries who had prompt preoperative CT scans, enabling timely management. The current study suggests that patients who are more unwell are more likely to miss the 60min window standard for reporting a 'critical' CT scan that has been recommended by NELA. Indeed, the fourth NELA report advocates a 60min window for all scans for laparotomy patients (regardless of category). If this were applied, compliance might be even poorer, with an unknown effect on patient outcomes. The current study illustrates one problem with imposing audit standards on practice with unknown or unproven influence on outcomes. Further investigations are required to determine which of the individual NELA audit standards are likely to improve patient outcomes and which might just reflect the overall clinical picture and be less helpful. An important disadvantage of the NELA categorisation of 'urgent' and 'critical' scans is that these categories are not universally declared within the CT requesting system or in the logistics of moving patients to and from the CT scanner. Instead, they are input into the online audit form, usually after the event. Therefore, it may not always be clear which timing standard applies at the time of the request.
From a practical standpoint, surgical patients requiring early operative intervention will continue to undergo CT imaging as early as possible to increase preoperative diagnostic accuracy. Although this study demonstrates that most scans were not reported within the standard time, this did not seem to impact this pathway or the outcome. These high-risk patients will have increased perioperative mortality risks caused by the disease process itself and associated patient factors.

Study limitations

This study is observational and retrospective, with all the usual limitations of this design, such as selection bias, missing data and transcription errors. We were unable to determine whether there were any significant changes to the number or availability of radiologists or the availability of remote access during the study period. All clinical data for the study (such as 'urgent' vs 'critical') were taken verbatim from clinical records without any interpretation by the authors, and it is not known whether these categories were applied consistently between individuals. We were unable to determine the exact reason for laparotomy. Because of the retrospective design, we were not able to determine whether scans were performed within the timing standard but not yet reported. It is possible that such scans may have been seen by the surgical team and acted upon prior to a formal radiology report. We were also unable to determine whether the scan reports were accurate or whether they changed the management of the patients within the study cohort. Further investigations may determine whether there is a relationship between timings, accuracy of diagnosis and patient outcomes.
In our study of 1,299 patients who required CT imaging before emergency laparotomy, guidelines for reporting were adhered to for only a minority of patients. However, there was no clear association between adherence to preoperative CT reporting guidelines and 30-day mortality. This may be because patients who were sicker were less likely to meet the timing standards, and also less likely to survive. This illustrates a selection bias when assessing patient outcomes according to guideline adherence in emergency surgery; adherence to audit standards may initially appear to be associated with improved outcomes but it is important to match patients for their disease and demographic characteristics when assessing these standards.
|
Genetic load in incomplete lupus erythematosus
We found that patients with ILE exhibited similar SLE risk allele genetic loads as patients with SLE, and genetic load did not affect the odds of having SLE compared with ILE. Therefore, patients with SLE and ILE exhibit a similar genetic predisposition, and SLE risk allele genetic load cannot differentiate subjects with ILE.
This study found that patients with ILE and SLE have similar SLE risk allele genetic load, suggesting that a reduction in genetic susceptibility does not limit SLE transition in some patients with ILE. Therefore, disease severity may be influenced by genetic variants specific to ILE and/or gene–environment interactions. Determining the factors that limit ILE disease severity may help with risk stratification and preventative treatment.
SLE is a complex chronic autoimmune disease with various systemic manifestations. SLE is typically diagnosed based on characteristic clinical and serological features defined by the American College of Rheumatology (ACR) or Systemic Lupus International Collaborating Clinics (SLICC). However, a subset of patients, referred to as incomplete lupus erythematosus (ILE), exhibit some clinical symptoms or serological evidence of SLE but do not fulfil classification criteria. Approximately 20% of patients with ILE transition to classified SLE within 5 years of onset, but most experience a relatively mild disease course with no symptomatic progression and limited involvement of major organs. The factors that limit disease severity in ILE are unknown. Genome-wide association studies have identified over 100 genes associated with SLE classification, including variants associated with specific disease manifestations, such as nephritis. Increases in the number of these SLE risk alleles, termed genetic load, are associated with SLE susceptibility. Furthermore, increased genetic load correlates with more severe disease, organ damage, renal dysfunction and mortality. Therefore, we hypothesise that ILE may share genetic associations with SLE but with a reduced genetic load. However, the genetic risk of ILE has not been studied. In this study, we determined the cumulative burden of SLE variants on ILE susceptibility by comparing the genetic load of SLE risk alleles in European American patients with ILE, patients with SLE and healthy controls.
Study population

European American patients with SLE (n=170) or ILE (n=169) and healthy controls with no self-reported lupus manifestations (n=133) were selected from existing collections in the Arthritis & Clinical Immunology Biorepository (CAP# 9418302) at the Oklahoma Medical Research Foundation. Demographic information was self-reported. Participants with SLE or ILE were characterised by a systematised medical records review for SLE classification criteria. ILE was defined as three ACR 1997 criteria and SLE as four or more ACR 1997 criteria. Patients with ILE by ACR who also did not meet SLICC classification criteria were considered ILE SLICC. All individuals with ILE were previously enrolled in the Lupus Family Registry and Repository (LFRR) (1995–2012). Healthy controls with no documented lupus manifestations were also previously enrolled in the LFRR or from the Oklahoma Immune Cohort through the Oklahoma Rheumatic Disease Research Cores Centre collections.

Genotyping, quality control and imputation

Samples were genotyped on the Infinium Global Screening Array-24 V.2.0 (Illumina, San Diego, California, USA), with 665,608 variants genotyped per sample. With consulting support from Rancho BioSciences (San Diego, California, USA), quality control was performed at the sample and variant level in PLINK (V.1.90). Samples with call rates below 90%, extreme heterozygosity measured by Wright's inbreeding coefficient (F<−0.05 or F>0.1) or discordance between genotyped and clinically recorded sex were excluded. Variants from sex and mitochondrial chromosomes and somatic variants with a minor allele frequency of <0.1% were also excluded. After quality control, 542,524 variants were available for imputation.
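The sample-level exclusions can be restated as a single predicate (call rate ≥ 90%, inbreeding coefficient −0.05 ≤ F ≤ 0.1, and concordant genotyped vs recorded sex). In the study these filters were applied in PLINK; the function below is only an illustrative sketch of the thresholds, with hypothetical argument names:

```python
def keep_sample(call_rate, inbreeding_f, genotyped_sex, recorded_sex):
    """Sample-level QC from the Methods: exclude samples with call rate
    below 90%, extreme heterozygosity (F < -0.05 or F > 0.1), or
    discordance between genotyped and clinically recorded sex."""
    return (call_rate >= 0.90
            and -0.05 <= inbreeding_f <= 0.1
            and genotyped_sex == recorded_sex)

print(keep_sample(0.97, 0.02, "F", "F"))  # True  (passes all three filters)
print(keep_sample(0.97, 0.15, "F", "F"))  # False (extreme heterozygosity)
```
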
The data were then prephased to infer underlying haplotypes with the 1000 Genomes phase III reference panel using SHAPEIT V.2.79, and whole-genome imputation was performed on the prephased haplotypes using IMPUTE V.2.3.2. To filter for variants of high imputation accuracy, only those with an information score of >0.9 were retained.

Genetic load

The genetic load was calculated for 472 subjects based on previously identified SLE-associated SNPs with genome-wide significance in the European population. Of the 123 variants meeting tier 1 statistical significance (p<5×10⁻⁸ and P_FDR<0.05), 99 met postimputation quality control and were included for genetic load calculation. Unweighted genetic loads were calculated as the total sum of risk alleles for each individual. Weighted genetic loads were defined as the sum of risk alleles multiplied by the beta coefficient (the natural logarithm of the previously published OR of each risk allele for SLE susceptibility). If the beta coefficient was negative, the count for the reverse-coded allele and the inverse OR was used.

Statistical analysis

The genetic load was compared using Kruskal-Wallis with Dunn's post hoc test to correct for multiple comparisons. Statistical comparisons and receiver operator characteristic (ROC) analysis were performed using GraphPad Prism V.8.3.1. ORs were computed using Excel V.14.6.9, comparing individuals with a specific weighted genetic load (±2) with those within the lowest 10%. P values less than 0.05 were considered statistically significant.
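The two load definitions above reduce to simple sums. A sketch assuming negative-beta loci have already been reverse coded (so every ln(OR) here is positive, as in the Methods); the SNP identifiers and odds ratios in the example are hypothetical:

```python
import math

def unweighted_genetic_load(allele_counts):
    """Total number of risk alleles carried (0, 1 or 2 per locus)."""
    return sum(allele_counts.values())

def weighted_genetic_load(allele_counts, odds_ratios):
    """Sum over loci of risk-allele count x beta, where beta = ln(published OR)."""
    return sum(n * math.log(odds_ratios[snp]) for snp, n in allele_counts.items())

counts = {"rsA": 2, "rsB": 1, "rsC": 0}     # hypothetical genotypes
ors = {"rsA": 1.5, "rsB": 2.0, "rsC": 1.2}  # hypothetical published ORs
print(unweighted_genetic_load(counts))               # 3
print(round(weighted_genetic_load(counts, ors), 3))  # 2*ln(1.5) + ln(2.0) = 1.504
```
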
Study population

To assess the impact of known SLE genetic associations on ILE susceptibility, we compared the genetic load of a set of 99 previously described SLE risk variants in European American patients with ILE (n=169), patients with SLE (n=170) and unaffected controls (n=133). Due to the low numbers of subjects from other races in the ILE cohort and challenges with combining race-specific genetic load information, we elected not to attempt any other race-specific genetic load comparisons. A similar frequency of childhood onset was observed in patients with ILE (7.1%) and patients with SLE (3.7%). As expected, the total number of ACR criteria met per patient was higher in patients with SLE (mean 5.7) than in patients with ILE (mean 3, p<0.0001). In addition, the frequency of patients who met malar or discoid rash, photosensitivity, oral or nasal ulcers, arthritis, serositis, renal disease, and neurological or haematological ACR criteria was significantly higher in patients with SLE compared with patients with ILE; however, the frequency who met immunological or antinuclear antibody criteria was similar between the two groups.

Patients with ILE exhibit a similar increased SLE risk allele genetic load as patients with SLE

Consistent with previous findings, European American patients with SLE exhibited significantly greater unweighted and weighted genetic loads compared with healthy controls. Unweighted and weighted genetic loads were also higher in European American patients with ILE compared with healthy controls and did not differ from patients with SLE. We next stratified the patients with ILE based on SLICC criteria, which are more sensitive compared with ACR criteria. A similar trend was observed in ILE SLICC patients (n=119) compared with patients with SLE, suggesting a comparable genetic load in patients with ILE and SLE irrespective of the classification criteria used.
To understand how SLE risk allele genetic load influenced the odds of disease in an individual, we calculated ORs comparing individuals with a given weighted genetic load (±2.0) with those within the lowest 10%. The probability of disease increased with increasing weighted genetic load for patients with SLE, ILE and ILE SLICC compared with healthy controls. Specifically, those with a weighted genetic load of 19 (±2.0) or higher showed greater odds of developing SLE or ILE compared with healthy controls. However, the odds of developing SLE compared with ILE did not change with increasing weighted genetic load. Similarly, higher genetic load differentiated patients with ILE and SLE from controls (area under the curve=0.62 for both) but not patients with ILE from patients with SLE (area under the curve=0.51, p=0.78) by ROC analysis.
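The sliding-window comparison above can be sketched as follows: for a given centre value, subjects with a weighted load within ±2 are compared against those in the lowest decile of the pooled load distribution. The study performed this in Excel; the quantile handling and the toy data below are assumptions for illustration, not a reproduction of their method:

```python
def windowed_odds_ratio(case_loads, control_loads, centre, width=2.0, ref_quantile=0.10):
    """OR for disease among subjects whose weighted genetic load lies in
    [centre - width, centre + width], relative to subjects in the lowest
    `ref_quantile` of the pooled load distribution."""
    pooled = sorted(case_loads + control_loads)
    cutoff = pooled[max(0, int(len(pooled) * ref_quantile) - 1)]
    a = sum(1 for x in case_loads if abs(x - centre) <= width)     # window, cases
    b = sum(1 for x in control_loads if abs(x - centre) <= width)  # window, controls
    c = sum(1 for x in case_loads if x <= cutoff)                  # reference, cases
    d = sum(1 for x in control_loads if x <= cutoff)               # reference, controls
    return (a * d) / (b * c)

# Toy loads: most cases cluster high, most controls low (ref_quantile widened
# so the tiny reference group contains both cases and controls).
print(windowed_odds_ratio([6, 19, 20, 21], [5, 6, 7, 18], centre=20, ref_quantile=0.25))  # 6.0
```
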
This study is the first to determine the genetic load of SLE risk alleles and unique risk variants in ILE. Although patients with ILE exhibit a milder phenotype compared with SLE, the genetic load of SLE risk alleles in patients with ILE was indistinguishable from that of patients with SLE, suggesting a similar genetic predisposition. However, it is unknown whether there are unique risk or protective variants associated with a subgroup of patients with ILE who never progress to SLE classification. Previous studies in patients with SLE found that a higher genetic load is associated with a more severe SLE disease phenotype, including a higher frequency of renal disease. As the patients with SLE in our cohort met more ACR criteria, including a higher frequency of renal disease, compared with patients with ILE, it is surprising that the genetic load is similar between the two groups. However, compared with other studies, we calculated genetic load based on the largest number of European SLE risk loci, which may be more inclusive of patients with less severe disease. Genetic load also correlates with earlier disease onset in patients with SLE, indicative of higher disease severity. In our study, the frequency of childhood-onset disease was low and similar in a subset of both patients with ILE and patients with SLE, which may contribute to the similar genetic load. As patients with ILE are often older compared with patients with SLE, patients with SLE with childhood onset may exhibit increased genetic load compared with patients with ILE. Our study has some limitations. We were unable to examine race-specific genetic load differences between patients with SLE and ILE and healthy controls due to the low numbers of subjects in the racial subgroups. Therefore, replication in larger race-matched cohorts and subsequent transancestral meta-analysis is imperative.
Furthermore, as we only had age at onset information for a subset of patients, it is unclear whether age at onset contributed to the similar genetic load. Together, our data support an enhanced genetic predisposition towards ILE similar to SLE through aggregate genetic variants. Future studies in larger, longitudinal preclinical cohorts are needed to determine whether the phenotypic differences between SLE and ILE are governed by novel ILE genetic variants or disparate environmental or gene–environment factors.
Human-interpretable image features derived from densely mapped cancer pathology slides predict diverse molecular phenotypes | 362d1d27-917e-4130-8d20-72504b7f72d9 | 7955068 | Pathology[mh] | While manual microscopic inspection of histopathology slides remains the gold standard for evaluating the malignancy, subtype, and treatment options for cancer , pathologists and oncologists increasingly rely on molecular assays to guide personalization of cancer therapy . These assays can be expensive and time-consuming and, unlike histopathology images, are not routinely collected, limiting their use in retrospective and exploratory research. Manual histological evaluation, on the other hand, presents several clinical challenges. Careful inspection requires significant time investment by board-certified anatomic pathologists and is often insufficient for prognostic prediction. Several evaluative tasks, including diagnostic classification, have also reported low inter-rater agreement across experts and low intra-rater agreement across multiple reads by the same expert , . Furthermore, manual assessment of the expression of specific genes from histopathology has not to our knowledge been demonstrated. Modern computer vision methods present the potential for rapid, reproducible, and cost-effective clinical and molecular predictions. Over the past decade, the quantity and resolution of digitized histology slides has dramatically improved . At the same time, the field of computer vision has made significant strides in pathology image analysis , , including automated prediction of tumor grade , mutational subtypes , and gene expression signatures across cancer types – . In addition to achieving diagnostic sensitivity and specificity metrics that match or exceed those of human pathologists – , automated computational pathology can also scale to service resource-constrained settings where few pathologists are available. 
As a result, there may be opportunities to integrate these technologies into the clinical workflows of developing countries . However, end-to-end deep learning models that infer outputs directly from raw images present significant risks for clinical settings, including fragility of machine learning models to population shift between training and real-world application, technical variability in sample preparation and analysis, and other unpredictable failure modes – . Many of these risks stem from lack of interpretability of “black-box” models , . “Black-box” model predictions are difficult for users to interrogate and understand, leading to user distrust and inability to diagnose failure modes or identify reliance on confounding correlates. Without reliable means for understanding when and how vulnerabilities may become failures, computational methods may face difficulty achieving widespread adoption in clinical settings , . One emerging solution has been the automated computation of human-interpretable image features (HIFs) to predict clinical outcomes. HIF-based prediction models often mirror the pathology workflow of searching for distinctive, stage-defining features under a microscope and offer opportunities for pathologists to validate intermediate steps and identify failure points. In addition, HIF-based solutions enable incorporation of histological knowledge and expert pixel-level annotations, which increases predictive power. Studied HIFs span a wide range of visual features, including stromal morphological structures , cell and nucleus morphologies , shapes and sizes of tumor regions , tissue textures , and the spatial distributions of tumor-infiltrating lymphocytes (TILs) , . In recent years, the relationship between the tumor microenvironment (TME) and patient response to targeted therapies has been made increasingly clear , . 
For instance, immuno-supportive phenotypes, which exhibit greater baseline antitumor immunity and improved immunotherapy response, have been linked to the presence of TILs and elevated expression of programmed death-ligand 1 (PD-L1) on tumor-associated immune cells. In contrast, immuno-suppressive phenotypes have been linked to the presence of tumor-associated macrophages and fibroblasts, as well as reduced PD-L1 expression – . HIF-based approaches have the potential to provide an interpretable window into the composition and spatial architecture of the TME in a manner complementary to conventional genomic approaches. While prior HIF-based studies have identified many useful feature classes, most have been limited in scope. Studies to date often involve a single cell or tissue type; none have explored features that combine both cell and tissue properties. In addition, the majority of reported HIFs have only been vetted on a single cancer type, often non-small cell lung cancer (NSCLC). In this research study, we present a computational pathology pipeline that can integrate high-resolution cell- and tissue-level information from whole-slide images (WSIs) to predict treatment-relevant, molecularly derived phenotypes across five different cancer types. Our approach combines the predictive power of deep learning with the interpretability of HIFs, which enables explicit incorporation of prior knowledge and achieves performance comparable to end-to-end models. We introduce a diverse collection of 607 HIFs ranging from simple cell (e.g., density of lymphocytes in cancer tissue) and tissue quantities (e.g., area of necrotic tissue) to complex spatial features capturing tissue architecture, tissue morphology, and cell–cell proximity. In this study, we demonstrate that such features can generalize across cancer types and provide a quantitative and interpretable link to specific and biologically relevant characteristics of each TME. 
Dataset characteristics and fully automated pipeline design

In order to test our approach on a diverse array of histopathology images, we obtained 2917 hematoxylin and eosin (H&E)-stained, formalin-fixed, and paraffin-embedded (FFPE) WSIs from The Cancer Genome Atlas (TCGA), corresponding to 2634 distinct patients. These images, each scanned at either ×20 or ×40 magnification, represented patients with skin cutaneous melanoma (SKCM), stomach adenocarcinoma (STAD), breast cancer (BRCA), lung adenocarcinoma (LUAD), and lung squamous cell carcinoma (LUSC) from 95 distinct clinical sites. These five cancer types were selected given their relevance to immuno-oncology therapies and their image availability in TCGA. We summarize the characteristics of TCGA patients in Supplementary Table . To supplement the TCGA analysis cohort, we obtained 4158 additional WSIs for the five cancer types to improve model robustness. To maximize capture of this information, we excluded images (n = 91, 3.1%) if they failed basic quality control checks as determined by expert pathologists. Criteria for quality control were limited to mislabeling of cancer type, excessive blur, or insufficient staining. For both TCGA and additional WSIs, we collected cell- and tissue-level annotations from a network of pathologists, amounting to >1.4 million cell-type point annotations and >200,000 tissue-type region annotations (Supplementary Table ). We used the resulting slides and annotations to design a fully automated pipeline to extract HIFs from these slides (summarized in Fig. ). First, we trained deep learning models for cell detection (cell-type models) and tissue region segmentation (tissue-type models). Training and validation of models was conducted on a development set of 1561 TCGA WSIs, supplemented by the 4158 additional WSIs (n = 5719) (Fig. ).
Next, we exhaustively generated cell- and tissue-type model predictions for 2826 TCGA WSIs, which were then used to compute a diverse array of HIFs for each WSI. Finally, we trained classical linear machine learning models to predict treatment-relevant molecular expression phenotypes using these HIFs.

Cell- and tissue-type model development and evaluation

In the first step of our pipeline, we trained two convolutional neural networks (CNNs) per cancer type: (1) tissue-type models trained to segment cancer tissue, cancer-associated stroma (CAS), and necrotic tissue regions and (2) cell-type models trained to detect lymphocytes, plasma cells, fibroblasts, macrophages, and cancer cells. These models were improved iteratively through a series of quality control steps, including significant input from board-certified pathologists (“Methods”). These CNNs were then used to exhaustively generate cell-type labels and tissue-type segmentations for each WSI. We visualized these predictions as colored heatmaps projected onto the original WSIs (Fig. and Supplementary Fig. ). Throughout model development, we tracked accuracy metrics on a comprehensively annotated validation dataset (Supplementary Fig. ). To directly compare the quality of our cell-type model predictions against pathologist annotation, we generated 250 frames (75 × 75 μm) of cell-type overlays, evenly sampled across the 5 cancer types and 5 cell types, each from a distinct WSI. These frames were then annotated for each of the five cell types by multiple external board-certified pathologists, enabling us to compare cell-type counts as predicted by our CNN cell-type model against pathologist annotation counts. We observed that Pearson correlations between cell-type model predictions and pathologist consensus were comparable to inter-pathologist correlation (differences in correlation ranged from −0.019 to 0.024, with a median absolute difference of 0.069) across the five cell types (Supplementary Fig. ).
Model versus pathologist consensus and inter-pathologist correlations were both strong (>0.8) for cancer cells and lymphocytes and moderate (approximately 0.4–0.7) for plasma cells, macrophages, and fibroblasts. To assess model generalizability, we redeployed our BRCA cell-type model to predict cell types on H&E, FFPE WSIs from an external BRCA dataset uploaded by Peikari et al. to The Cancer Imaging Archive (TCIA) . We then repeated the same frame analysis framework using 250 frames evenly sampled across the five cell types, which revealed robust concordance between our cell-type model and pathologist consensus in these external WSIs (Supplementary Fig. ). Correlation coefficients ranged from 0.607 in macrophages to 0.926 in lymphocytes and differed from inter-pathologist correlation by a median absolute difference of 0.076. As a benchmark, inter-pathologist correlation represents the optimal performance that can be expected from models trained and evaluated using pathologist annotations as the ground truth. External data were not publicly available for the remaining cancer types. While the BRCA cell-type model generalized without additional tuning, other models may require retraining when applied to new datasets.

Cell- and tissue-type predictions yield a wide spectrum of HIFs

When quantified, our cell- and tissue-type predictions capture broad multivariate information about the spatial distribution of cells and tissues in each slide. Specifically, we used model predictions to extract 607 HIFs (Fig. ), which can be understood in terms of 6 categories (Fig. ). The first category includes cell-type counts and densities across different tissue regions (e.g., density of plasma cells in cancer tissue; Fig. ). The next category includes cell-level cluster features that capture inter-cellular spatial relationships, such as cluster dispersion, size, and extent (e.g., mean cluster size of fibroblasts in CAS; Fig. ).
The third category captures cell-level proportion and proximity features, such as the proportional count of lymphocytes versus fibroblasts within 80 microns (μm) of the cancer–stroma interface (CSI; Fig. ). The fourth category includes tissue area (e.g., mm² of necrotic tissue) and multiplicity counts (e.g., number of significant regions of cancer tissue) (Fig. ). The fifth category includes tissue architecture features, such as the average solidity (solidness) of cancer tissue regions or the fractal dimension (geometrical complexity) of CAS (Fig. ). The final category captures tissue-level morphology using metrics such as perimeter² over area (shape roughness), lacunarity (gappiness), and eccentricity (Fig. ). This broad enumeration of biologically relevant HIFs explores a wide range of mechanisms underlying histopathology across diverse cancer types.

HIFs capture sufficient information to stratify cancer types

To visualize the global structure of the HIF feature matrix, we used Uniform Manifold Approximation and Projection (UMAP) , to reduce the 607-dimensional HIF space into two dimensions (Fig. ). The two-dimensional (2-D) manifold projection of HIFs was able to separate BRCA, SKCM, and STAD into distinct clusters, while merging NSCLC subtypes LUAD and LUSC into one overlapping cluster (V-measure score = 0.47 using k-means with k = 4). Cancer-type differences could be traced to specific and interpretable cell- and tissue-level features within the TME (Fig. ). SKCM samples exhibited higher densities of cancer cells in CAS (pan-cancer median Z-score = 0.55, P < 10⁻³⁰) and greater cancer tissue area per slide (Z-score = 0.72, P < 10⁻³⁰) relative to other cancer types. These findings reflect biopsy protocols for SKCM, in which the excised region involves predominantly cancer tissue and minimal normal tissue.
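The stratification check above — embed the 607-dimensional HIF matrix in two dimensions, cluster with k-means, and score cluster–label agreement with the V-measure — can be sketched with scikit-learn. PCA stands in for UMAP here to keep dependencies light, and the HIF matrix, cancer-type labels, and degree of separation are all simulated.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import v_measure_score

rng = np.random.default_rng(0)
# Simulated HIF matrix: 4 cancer types, 50 slides each, 607 features,
# with a per-type mean shift so the types are separable.
n_types, per_type, n_feats = 4, 50, 607
centers = rng.normal(0.0, 1.0, (n_types, n_feats))
X = np.vstack([centers[t] + rng.normal(0.0, 0.8, (per_type, n_feats))
               for t in range(n_types)])
cancer_type = np.repeat(np.arange(n_types), per_type)

emb = PCA(n_components=2, random_state=0).fit_transform(X)  # UMAP in the paper
clusters = KMeans(n_clusters=n_types, n_init=10,
                  random_state=0).fit_predict(emb)
v = v_measure_score(cancer_type, clusters)  # 1.0 = clusters match cancer type
```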
NSCLC subtypes LUAD and LUSC exhibited higher densities of macrophages in CAS (Z-score = 0.54 and 0.91, respectively; P < 10⁻³⁰), reflecting the large population of macrophages infiltrating alveolar and interstitial compartments during lung inflammation . NSCLC subtypes also exhibited higher densities of plasma cells (Z-score = 0.61 and 0.49; P < 10⁻³⁰) in CAS, in agreement with prior findings in which proliferating B cells were observed in ~35% of lung cancers , . STAD exhibited the highest density of lymphocytes in CAS (Z-score = 0.11, P = 2.16 × 10⁻¹⁹), corroborating prior work that identified STAD as having the largest fraction of TIL-positive patches per WSI among 13 TCGA cancer types, including the 5 examined here . Notably, HIFs are able to stratify cancer types by known histological differences without explicit tuning for cancer-type detection, as is required by “black box” approaches. In a stratified analysis, SKCM metastatic and primary tumor samples also exhibited notable differences, including a greater average solidity and area of cancer tissue among metastatic tumors (Supplementary Fig. ). Considering spatial heterogeneity, we observed an enrichment of lymphocytes and plasma cells in SKCM as well as an enrichment of cancer cells in LUSC and LUAD at the CSI relative to in cancer tissue plus CAS (CT + CAS) (Supplementary Fig. ).

HIFs are concordant with sequencing-based cell and immune signature quantifications

To further validate our deep learning-based cell quantifications, we compared the abundance of the same cell type predicted by our cell-type models with those based on RNA sequencing (RNA-Seq) . Image-based cell quantifications were correlated with sequencing-based quantifications across all patient samples and cancer types (pan-cancer) in three cell types (Supplementary Fig.
): leukocyte fraction (Spearman correlation coefficient (ρ) = 0.55, P < 2.2 × 10⁻¹⁶), lymphocyte fraction (ρ = 0.42, P < 2.2 × 10⁻¹⁶), and plasma cell fraction (ρ = 0.40, P < 2.2 × 10⁻¹⁶). Notably, imperfect correlation is expected as tissue samples used for RNA-Seq and histology imaging are extracted from different portions of the patient’s tumor and thus vary in TME due to spatial heterogeneity. There is significant correlation structure among individual HIFs due to the modular process by which feature sets are generated, as well as inherent correlations in underlying biological phenomena. For example, proportion, density, and spatial features of a given cell or tissue type all rely on the same underlying model predictions. In order to identify mechanistically relevant and inter-correlated groups of HIFs, hierarchical agglomerative clustering was conducted (“Methods”; Supplementary Data ). This clustering also increases the power of multiple-hypothesis-testing corrections by accounting for feature correlation . Pan-cancer HIF clusters strongly correlated with immune signatures of leukocyte infiltration, immunoglobulin G (IgG) expression, transforming growth factor (TGF)-β expression, and wound healing (Fig. ), as well as angiogenesis and hypoxia (Supplementary Fig. ), all quantified by scoring bulk RNA-Seq reads for known immune and gene expression signatures – . We conducted the same correlational analysis for each cancer type individually and observed high concordance among the top correlated HIF clusters per immune signature (Supplementary Table ). Molecular quantification of leukocyte infiltration was concordant with the density of leukocyte-lineage cells in CT + CAS quantified by our deep learning pipeline, including lymphocytes (median absolute Spearman correlation ρ for associated HIF cluster = 0.48, P < 10⁻³⁰; Fig. ), plasma cells (cluster ρ = 0.46, P < 10⁻³⁰), and macrophages (cluster ρ = 0.40, P < 10⁻³⁰).
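The correlation-aware grouping described above — hierarchical agglomerative clustering so that inter-correlated HIFs are treated as clusters rather than independent tests — can be sketched with scipy. The feature matrix below is simulated (two latent factors each drive a block of correlated features), and 1 − |Pearson correlation| is an assumed choice of clustering distance.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
# Simulated HIF matrix over 200 slides: features 0-4 share one latent
# factor, features 5-9 share another, features 10-12 are independent.
n = 200
f1, f2 = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack(
    [f1 + 0.3 * rng.normal(size=n) for _ in range(5)]
    + [f2 + 0.3 * rng.normal(size=n) for _ in range(5)]
    + [rng.normal(size=n) for _ in range(3)]
)

# Distance = 1 - |correlation|, so tightly correlated HIFs merge first.
corr = np.corrcoef(X, rowvar=False)
dist = 1.0 - np.abs(corr)
condensed = dist[np.triu_indices_from(dist, k=1)]  # scipy's condensed form
Z = linkage(condensed, method="average")
clusters = fcluster(Z, t=0.5, criterion="distance")
```

Cutting the dendrogram at a correlation-based threshold yields the HIF clusters on which cluster-level statistics can then be computed.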
Similarly, we observed associations between IgG expression and the density of leukocyte-lineage cells in CT + CAS, with plasma cells being the most strongly correlated (cluster ρ = 0.58, P < 10⁻³⁰), as expected given their role in producing Igs (Fig. ). TGF-β expression was associated with the density of fibroblasts in CT + CAS (cluster ρ = 0.28, P < 10⁻³⁰; Fig. ), building upon prior studies which found that TGF-β1 can promote fibroblast proliferation – . Interestingly, recent studies in breast and ovarian cancer have highlighted the role of several subsets of cancer-associated fibroblasts in promoting an immunosuppressive environment resistant to anti-programmed cell death protein 1 (anti-PD-1) therapy, including one subset associated with the TGF-β signaling pathway . TGF-β expression was also correlated with the area of CAS relative to CT + CAS (cluster ρ = 0.31, P < 10⁻³⁰), shedding further light on the role of stromal proteins in modulating TGF-β levels . The wound healing signature was positively associated with the density of fibroblasts in CAS versus in cancer tissue (cluster ρ = 0.29, P < 10⁻³⁰; Fig. ), which corroborates findings that both tumors and healing wounds alike modulate fibroblast recruitment and proliferation to facilitate extracellular matrix deposition . H&E snapshots corresponding to high expression of each of the four immune signatures are shown in Fig. with corresponding cell-type heatmaps overlaid. The angiogenesis signature was positively associated with the density of fibroblasts (cluster ρ = 0.32, P < 10⁻³⁰) and macrophages (cluster ρ = 0.31, P < 10⁻³⁰) in CAS, corroborating the critical role that fibroblasts and macrophages play in modulating extracellular matrix components to promote neovascularization , . Interestingly, the angiogenesis signature was also associated with the area of CAS relative to CT + CAS (cluster ρ = 0.29, P < 10⁻³⁰), reflecting the importance of stromal cell populations (Supplementary Fig. ).
The hypoxia signature was most strongly associated with area of necrotic tissue (cluster ρ = 0.45, P < 10⁻³⁰), as expected by their causal relationship (Supplementary Fig. ). Hypoxia was also associated with density of plasma cells in CAS (cluster ρ = 0.36, P < 10⁻³⁰), which confirms prior findings of increased plasma cell generation under hypoxic conditions . While many associations noted above have been previously identified using experimental methods, a HIF-based approach enables validation and systematic quantification of the strength of such associations.

HIFs are predictive of clinically relevant phenotypes

To evaluate the capability of HIFs to predict expression of clinically relevant, immuno-modulatory genes, we conducted supervised prediction of binarized classes for five clinically relevant phenotypes: (1) PD-1 expression, (2) PD-L1 expression, (3) cytotoxic T-lymphocyte-associated protein 4 (CTLA-4) expression, (4) homologous recombination deficiency (HRD) score, and (5) T cell immunoreceptor with Ig and ITIM domains (TIGIT) expression (Fig. and Supplementary Fig. ). Using the 607 HIFs computed per WSI, predictions were conducted for cancer types individually as well as pan-cancer. SKCM predictions were conducted only for TIGIT expression due to insufficient sample sizes for the remainder of outcomes (“Methods”). To demonstrate model generalizability across varying patient demographics and sample collection processes, area under the receiver operating characteristic curve (AUROC) and area under the precision-recall curve (AUPRC) performance metrics were computed on hold-out sets composed exclusively of patient samples derived from tissue source sites not seen in the training sets (Supplementary Table ). HIF-based models were not predictive for every phenotype in each cancer type (hold-out AUROC < 0.6; see Supplementary Table for all results including negatives). In the successful prediction models (hold-out AUROC range = 0.601–0.864; Fig.
), precision-recall curves revealed that models were robust to class imbalance, achieving AUPRC performance surpassing positive class prevalence by 0.104–0.306 (Supplementary Fig. ). On average across molecular phenotype prediction tasks, AUROC hold-out performance of our HIF-based linear models was comparable to that achieved by end-to-end deep learning models trained using the same architecture and hyper-parameters from Kather et al. (Supplementary Table ) . Differences in AUROC ranged from −0.16 to 0.25, with a median absolute difference of 0.065. Given the small sample sizes, HIF-based models may be better statistically powered. Indeed, HIF-based models outperformed end-to-end models in several prediction tasks, including most notably SKCM prediction of TIGIT expression, which boasted the smallest sample size. AUROC performance of our HIF-based linear model for PD-L1 expression in LUAD trained on roughly 300 WSIs was also comparable to that achieved by previously published “black-box” deep learning models trained on hundreds of thousands of paired H&E and PD-L1 example patches in NSCLC . While our HIF generation process explicitly encodes for interactions between biological entities (e.g., count of lymphocytes within 80 μm of fibroblasts), we also compared our HIF-based linear models against HIF-based random forest models, which directly account for interaction effects between HIFs, and achieved comparable hold-out AUROC and AUPRC performance (Supplementary Table ).

Predictive HIFs provide interpretable link to clinically relevant phenotypes

Interpretable features enable interrogation and further validation of model parameters as well as generation of biological hypotheses. Toward this end, for each prediction task we identified the five most important HIF clusters as determined by magnitude of model coefficients (Fig. and Supplementary Fig. ) and computed cluster-level P values to evaluate significance (Supplementary Table ; “Methods”).
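The evaluation protocol above — linear models on the 607 HIFs, with hold-out sets drawn from tissue source sites never seen in training, scored by AUROC and AUPRC — can be sketched with scikit-learn. Everything below (features, binarized labels, site assignments, regularization strength) is simulated or assumed for illustration, not the paper's actual model or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import GroupShuffleSplit
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Simulated cohort: 400 slides x 607 HIFs; the binarized phenotype is
# driven by the first 5 features; each slide carries a source-site tag.
n, p = 400, 607
X = rng.normal(size=(n, p))
y = (X[:, :5].sum(axis=1) + rng.normal(0.0, 1.0, n) > 0).astype(int)
sites = rng.integers(0, 20, n)  # 20 tissue source sites

# Hold out whole sites so train and test share no clinical site.
train, test = next(GroupShuffleSplit(test_size=0.25, random_state=0)
                   .split(X, y, groups=sites))
scaler = StandardScaler().fit(X[train])
model = LogisticRegression(C=0.1, max_iter=1000)  # L2-regularized linear model
model.fit(scaler.transform(X[train]), y[train])

scores = model.predict_proba(scaler.transform(X[test]))[:, 1]
auroc = roc_auc_score(y[test], scores)
auprc = average_precision_score(y[test], scores)
```

Comparing `auprc` with the positive-class prevalence `y[test].mean()` mirrors the paper's check that AUPRC surpasses prevalence under class imbalance.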
As expected, prediction of PD-1 and PD-L1 involved similar HIF clusters (Pearson correlation between PD-1 and PD-L1 expression = 0.53; Supplementary Fig. ). For example, the extent of tumor inflammation, as measured by the count of cancer cells within 80 μm of lymphocytes, as well as the density of lymphocytes in CT + CAS, was significantly selected during model fitting for both PD-1 and PD-L1 expression in pan-cancer and BRCA models (Fig. and Supplementary Fig. ). Furthermore, in both LUAD and LUSC, the count of lymphocytes in CT + CAS was similarly predictive of PD-1 and PD-L1 expression. The importance of these HIFs that capture lymphocyte infiltration between and surrounding cancer cells corroborates prior literature, which demonstrated that TILs correlated strongly with higher expression levels of PD-1 and PD-L1 in early BRCA and NSCLC , . The area, morphology, or multiplicity of necrotic tissue proved predictive of PD-1 expression in LUAD, LUSC, and STAD models and of PD-L1 expression in pan-cancer, BRCA, and LUAD models, expanding upon prior findings that tumor necrosis correlated positively with PD-1 and PD-L1 expression in LUAD . The density, proximity, or clustering properties of plasma cells were predictive of PD-1 expression in all models excluding LUAD, suggesting a role for plasma cells in modulating PD-1 expression. Recent studies in SKCM, renal cell carcinoma, and soft-tissue sarcoma have demonstrated that an enrichment of B cells in tertiary lymphoid structures was positively predictive of response to immune checkpoint blockade therapy – . The density of fibroblasts in CAS or within 80 μm of the CSI was predictive of PD-L1 expression in LUAD and STAD, respectively, corroborating earlier discoveries that cancer-associated fibroblasts promote PD-L1 expression . Less is known about the relationship between the TME and CTLA-4 expression.
By investigating predictive HIFs, we can begin to enumerate features of the TME that correlate with CTLA-4 expression. The proximity of lymphocytes to cancer cells (pan-cancer and BRCA), morphology of necrotic regions (LUAD and LUSC), and density of cancer cells in CT + CAS versus exclusively in CAS (BRCA and STAD) were predictive of CTLA-4 expression across multiple models (Fig. and Supplementary Fig. ). Area of necrotic tissue (pan-cancer and BRCA) as well as various morphological properties of necrotic regions including perimeter and lacunarity (BRCA and STAD) was predictive of HRD (Fig. and Supplementary Fig. ). In HRD, ineffective DNA damage repair can result in the accumulation of severe DNA damage and subsequent cell death through apoptosis as well as necrosis , . The density and count of fibroblasts near or in CAS was also predictive of HRD in the pan-cancer and BRCA models, corroborating prior findings that persistent DNA damage and subsequent accumulation of unrepaired DNA strand breaks can induce reprogramming of normal fibroblasts into cancer-associated fibroblasts . Like the three other immune checkpoint proteins (PD-1, PD-L1, and CTLA-4), TIGIT expression was also associated with markers of tumor inflammation, including the count of cancer cells within 80 μm of lymphocytes (pan-cancer and BRCA), the total number of lymphocytes in CT + CAS (pan-cancer and BRCA), and the proportional count of lymphocytes to cancer cells within 80 μm of the CSI (LUAD) (Fig. and Supplementary Fig. ). These findings corroborate prior findings that TIGIT expression, alongside PD-1 and PD-L1 expression (Pearson correlation between TIGIT and PD-1 = 0.84; TIGIT and PD-L1 = 0.56; Supplementary Fig. ), is correlated with TILs . 
HIF clusters capturing morphology and architecture of necrotic tissue (e.g., fractal dimension, lacunarity, extent, perimeter²/area) were associated with TIGIT expression in LUAD, LUSC, SKCM, and STAD models, although these relationships have yet to be investigated.
Training and validation of models were conducted on a development set of 1561 TCGA WSIs, supplemented by the 4158 additional WSIs (n = 5719) (Fig. ). Next, we exhaustively generated cell- and tissue-type model predictions for 2826 TCGA WSIs, which were then used to compute a diverse array of HIFs for each WSI. Finally, we trained classical linear machine learning models to predict treatment-relevant molecular expression phenotypes using these HIFs. In the first step of our pipeline, we trained two convolutional neural networks (CNNs) per cancer type: (1) tissue-type models trained to segment cancer tissue, cancer-associated stroma (CAS), and necrotic tissue regions and (2) cell-type models trained to detect lymphocytes, plasma cells, fibroblasts, macrophages, and cancer cells. These models were improved iteratively through a series of quality control steps, including significant input from board-certified pathologists (“Methods”). These CNNs were then used to exhaustively generate cell-type labels and tissue-type segmentations for each WSI. We visualized these predictions as colored heatmaps projected onto the original WSIs (Fig. and Supplementary Fig. ). Throughout model development, we tracked accuracy metrics on a comprehensively annotated validation dataset (Supplementary Fig. ). To directly compare the quality of our cell-type model predictions against pathologist annotation, we generated 250 frames (75 × 75 μm) of cell-type overlays evenly sampled across the 5 cancer types and 5 cell types, each from a distinct WSI. These frames were then annotated for each of the five cell types by multiple external board-certified pathologists, enabling us to compare cell-type counts as predicted by our CNN cell-type model against pathologist annotation counts. 
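The frame-level comparison between model counts and a pathologist consensus, including the leave-one-out benchmark described below, can be sketched as follows. The function names and array layout are illustrative assumptions, not the paper's code.

```python
import numpy as np
from scipy.stats import pearsonr


def consensus_correlation(model_counts, pathologist_counts):
    """Pearson r between model cell counts and the pathologist consensus.

    model_counts: (n_frames,) array of model-predicted counts per frame.
    pathologist_counts: (n_frames, n_pathologists) annotation counts.
    The consensus is the per-frame median across pathologists.
    """
    consensus = np.median(pathologist_counts, axis=1)
    r, _ = pearsonr(model_counts, consensus)
    return r


def leave_one_out_correlations(pathologist_counts):
    """For each pathologist, correlate their counts against the median
    of the remaining pathologists (inter-pathologist benchmark)."""
    n_pathologists = pathologist_counts.shape[1]
    correlations = []
    for i in range(n_pathologists):
        rest = np.delete(pathologist_counts, i, axis=1)
        consensus = np.median(rest, axis=1)
        r, _ = pearsonr(pathologist_counts[:, i], consensus)
        correlations.append(r)
    return correlations
```

The leave-one-out values give the benchmark referenced later: the best correlation a model can be expected to reach when pathologist annotations are the ground truth.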
We observed that Pearson correlations between cell-type model predictions and pathologist consensus were comparable to inter-pathologist correlation (differences in correlation ranged from −0.019 to 0.024, with a median absolute difference of 0.069) across the five cell types (Supplementary Fig. ). Model versus pathologist consensus and inter-pathologist correlations were both strong (>0.8) for cancer cells and lymphocytes and moderate (approximately 0.4–0.7) for plasma cells, macrophages, and fibroblasts. To assess model generalizability, we redeployed our BRCA cell-type model to predict cell types on H&E, FFPE WSIs from an external BRCA dataset uploaded by Peikari et al. to The Cancer Imaging Archive (TCIA) . We then repeated the same frame analysis framework using 250 frames evenly sampled across the five cell types, which revealed robust concordance between our cell-type model and pathologist consensus in these external WSIs (Supplementary Fig. ). Correlation coefficients ranged from 0.607 in macrophages to 0.926 in lymphocytes and differed from inter-pathologist correlation by a median absolute difference of 0.076. As a benchmark, inter-pathologist correlation represents the optimal performance that can be expected from models trained and evaluated using pathologist annotations as the ground truth. External data were not publicly available for the remaining cancer types. While the BRCA cell-type model generalized without additional tuning, other models may require retraining when applied to new datasets. When quantified, our cell- and tissue-type predictions capture broad multivariate information about the spatial distribution of cells and tissues in each slide. Specifically, we used model predictions to extract 607 HIFs (Fig. ), which can be understood in terms of 6 categories (Fig. ). The first category includes cell-type counts and densities across different tissue regions (e.g., density of plasma cells in cancer tissue; Fig. ). 
The next category includes cell-level cluster features that capture inter-cellular spatial relationships, such as cluster dispersion, size, and extent (e.g., mean cluster size of fibroblasts in CAS; Fig. ). The third category captures cell-level proportion and proximity features, such as the proportional count of lymphocytes versus fibroblasts within 80 microns (μm) of the cancer–stroma interface (CSI; Fig. ). The fourth category includes tissue area (e.g., mm² of necrotic tissue) and multiplicity counts (e.g., number of significant regions of cancer tissue) (Fig. ). The fifth category includes tissue architecture features, such as the average solidity (solidness) of cancer tissue regions or the fractal dimension (geometrical complexity) of CAS (Fig. ). The final category captures tissue-level morphology using metrics such as perimeter² over area (shape roughness), lacunarity (gappiness), and eccentricity (Fig. ). This broad enumeration of biologically relevant HIFs explores a wide range of mechanisms underlying histopathology across diverse cancer types. To visualize the global structure of the HIF feature matrix, we used Uniform Manifold Approximation and Projection (UMAP) to reduce the 607-dimensional HIF space into two dimensions (Fig. ). The two-dimensional (2-D) manifold projection of HIFs was able to separate BRCA, SKCM, and STAD into distinct clusters, while merging NSCLC subtypes LUAD and LUSC into one overlapping cluster (V-measure score = 0.47 using k-means with k = 4). Cancer-type differences could be traced to specific and interpretable cell- and tissue-level features within the TME (Fig. ). SKCM samples exhibited higher densities of cancer cells in CAS (pan-cancer median Z-score = 0.55, P < 10⁻³⁰) and greater cancer tissue area per slide (Z-score = 0.72, P < 10⁻³⁰) relative to other cancer types. These findings reflect biopsy protocols for SKCM, in which the excised region involves predominantly cancer tissue and minimal normal tissue. 
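The 2-D projection and cluster-agreement analysis above can be sketched as below. The paper uses UMAP (the umap-learn package) for the projection; to keep this sketch dependency-light, PCA stands in for that step, while the k-means clustering and V-measure scoring follow the k = 4 comparison described in the text.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import v_measure_score


def project_and_score(hif_matrix, cancer_labels, n_clusters=4, seed=0):
    """Project an (n_slides, n_hifs) HIF matrix to 2-D, cluster the
    embedding with k-means, and score agreement between the clusters and
    cancer-type labels via V-measure.

    PCA is an illustrative stand-in for UMAP, which the paper uses.
    """
    embedding = PCA(n_components=2, random_state=seed).fit_transform(hif_matrix)
    predicted = KMeans(n_clusters=n_clusters, n_init=10,
                       random_state=seed).fit_predict(embedding)
    return embedding, v_measure_score(cancer_labels, predicted)
```

A V-measure of 1.0 means clusters and labels agree perfectly; the paper's 0.47 reflects LUAD and LUSC merging into one cluster.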
NSCLC subtypes LUAD and LUSC exhibited higher densities of macrophages in CAS (Z-score = 0.54 and 0.91, respectively; P < 10⁻³⁰), reflecting the large population of macrophages infiltrating alveolar and interstitial compartments during lung inflammation . NSCLC subtypes also exhibited higher densities of plasma cells (Z-score = 0.61 and 0.49; P < 10⁻³⁰) in CAS, in agreement with prior findings in which proliferating B cells were observed in ~35% of lung cancers , . STAD exhibited the highest density of lymphocytes in CAS (Z-score = 0.11, P = 2.16 × 10⁻¹⁹), corroborating prior work that identified STAD as having the largest fraction of TIL-positive patches per WSI among 13 TCGA cancer types, including the 5 examined here . Notably, HIFs are able to stratify cancer types by known histological differences without explicit tuning for cancer-type detection, as is required by “black box” approaches. In a stratified analysis, SKCM metastatic and primary tumor samples also exhibited notable differences, including a greater average solidity and area of cancer tissue among metastatic tumors (Supplementary Fig. ). Considering spatial heterogeneity, we observed an enrichment of lymphocytes and plasma cells in SKCM as well as an enrichment of cancer cells in LUSC and LUAD at the CSI relative to in cancer tissue plus CAS (CT + CAS) (Supplementary Fig. ). To further validate our deep learning-based cell quantifications, we compared the abundance of the same cell type predicted by our cell-type models with those based on RNA sequencing (RNA-Seq) . Image-based cell quantifications were correlated with sequencing-based quantifications across all patient samples and cancer types (pan-cancer) in three cell types (Supplementary Fig. ): leukocyte fraction (Spearman correlation coefficient (ρ) = 0.55, P < 2.2 × 10⁻¹⁶), lymphocyte fraction (ρ = 0.42, P < 2.2 × 10⁻¹⁶), and plasma cell fraction (ρ = 0.40, P < 2.2 × 10⁻¹⁶). 
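The image-versus-sequencing concordance check above amounts to a per-cell-type Spearman correlation across patients. A minimal sketch follows; the dictionary layout mapping cell type to per-patient fractions is an illustrative assumption.

```python
import numpy as np
from scipy.stats import spearmanr


def concordance_by_cell_type(image_fracs, rnaseq_fracs):
    """Spearman concordance between image-derived and RNA-Seq-derived
    cell-type fractions, computed per cell type across patients.

    image_fracs / rnaseq_fracs: dicts mapping cell type name to an
    (n_patients,) array of fractions. Returns cell type -> (rho, P).
    """
    return {cell: spearmanr(image_fracs[cell], rnaseq_fracs[cell])
            for cell in image_fracs}
```

Spearman (rank) correlation is the natural choice here because the two platforms quantify abundance on different scales, so only the monotone relationship is meaningful.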
Notably, imperfect correlation is expected as tissue samples used for RNA-Seq and histology imaging are extracted from different portions of the patient’s tumor and thus vary in TME due to spatial heterogeneity. There is significant correlation structure among individual HIFs due to the modular process by which feature sets are generated, as well as inherent correlations in underlying biological phenomena. For example, proportion, density, and spatial features of a given cell or tissue type all rely on the same underlying model predictions. In order to identify mechanistically relevant and inter-correlated groups of HIFs, hierarchical agglomerative clustering was conducted (“Methods”; Supplementary Data ). This clustering also increases the power of multiple-hypothesis-testing corrections by accounting for feature correlation . Pan-cancer HIF clusters strongly correlated with immune signatures of leukocyte infiltration, immunoglobulin G (IgG) expression, transforming growth factor (TGF)-β expression, and wound healing (Fig. ), as well as angiogenesis and hypoxia (Supplementary Fig. ), all quantified by scoring bulk RNA-Seq reads for known immune and gene expression signatures – . We conducted the same correlational analysis for each cancer type individually and observed high concordance among the top correlated HIF clusters per immune signature (Supplementary Table ). Molecular quantification of leukocyte infiltration was concordant with the density of leukocyte-lineage cells in CT + CAS quantified by our deep learning pipeline, including lymphocytes (median absolute Spearman correlation ρ for associated HIF cluster = 0.48, P < 10⁻³⁰; Fig. ), plasma cells (cluster ρ = 0.46, P < 10⁻³⁰), and macrophages (cluster ρ = 0.40, P < 10⁻³⁰). 
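The hierarchical clustering used above to group correlated HIFs can be sketched as clustering on a correlation-distance matrix. The "average" linkage and the distance threshold below are assumptions for illustration; the paper's exact settings are given in its "Methods".

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform


def cluster_hifs(hif_matrix, threshold=0.5):
    """Group correlated HIFs via agglomerative clustering.

    hif_matrix: (n_slides, n_hifs) feature matrix.
    Distance between two HIFs is 1 - |Pearson r|, so strongly
    (anti-)correlated features land in the same cluster.
    Returns an (n_hifs,) array of integer cluster labels.
    """
    corr = np.corrcoef(hif_matrix, rowvar=False)
    dist = 1.0 - np.abs(corr)
    np.fill_diagonal(dist, 0.0)  # remove floating-point residue on diagonal
    condensed = squareform(dist, checks=False)
    Z = linkage(condensed, method="average")
    return fcluster(Z, t=threshold, criterion="distance")
```

Reporting statistics at the cluster level, as done in the text (cluster ρ values), then treats each group of near-redundant features as one hypothesis.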
Similarly, we observed associations between IgG expression and the density of leukocyte-lineage cells in CT + CAS, with plasma cells being the most strongly correlated (cluster ρ = 0.58, P < 10⁻³⁰), as expected given their role in producing Igs (Fig. ). TGF-β expression was associated with the density of fibroblasts in CT + CAS (cluster ρ = 0.28, P < 10⁻³⁰; Fig. ), building upon prior studies which found that TGF-β1 can promote fibroblast proliferation – . Interestingly, recent studies in breast and ovarian cancer have highlighted the role of several subsets of cancer-associated fibroblasts in promoting an immunosuppressive environment resistant to anti-programmed cell death protein 1 (anti-PD-1) therapy, including one subset associated with the TGF-β signaling pathway . TGF-β expression was also correlated with the area of CAS relative to CT + CAS (cluster ρ = 0.31, P < 10⁻³⁰), shedding further light on the role of stromal proteins in modulating TGF-β levels . The wound healing signature was positively associated with the density of fibroblasts in CAS versus in cancer tissue (cluster ρ = 0.29, P < 10⁻³⁰; Fig. ), which corroborates findings that both tumors and healing wounds alike modulate fibroblast recruitment and proliferation to facilitate extracellular matrix deposition . H&E snapshots corresponding to high expression of each of the four immune signatures are shown in Fig. with corresponding cell-type heatmaps overlaid. The angiogenesis signature was positively associated with the density of fibroblasts (cluster ρ = 0.32, P < 10⁻³⁰) and macrophages (cluster ρ = 0.31, P < 10⁻³⁰) in CAS, corroborating the critical role that fibroblasts and macrophages play in modulating extracellular matrix components to promote neovascularization , . Interestingly, the angiogenesis signature was also associated with the area of CAS relative to CT + CAS (cluster ρ = 0.29, P < 10⁻³⁰), reflecting the importance of stromal cell populations (Supplementary Fig. ). 
The hypoxia signature was most strongly associated with area of necrotic tissue (cluster ρ = 0.45, P < 10⁻³⁰), as expected by their causal relationship (Supplementary Fig. ). Hypoxia was also associated with density of plasma cells in CAS (cluster ρ = 0.36, P < 10⁻³⁰), which confirms prior findings of increased plasma cell generation under hypoxic conditions . While many associations noted above have been previously identified using experimental methods, a HIF-based approach enables validation and systematic quantification of the strength of such associations. To evaluate the capability of HIFs to predict expression of clinically relevant, immuno-modulatory genes, we conducted supervised prediction of binarized classes for five clinically relevant phenotypes: (1) PD-1 expression, (2) PD-L1 expression, (3) cytotoxic T-lymphocyte-associated protein 4 (CTLA-4) expression, (4) homologous recombination deficiency (HRD) score, and (5) T cell immunoreceptor with Ig and ITIM domains (TIGIT) expression (Fig. and Supplementary Fig. ). Using the 607 HIFs computed per WSI, predictions were conducted for cancer types individually as well as pan-cancer. SKCM predictions were conducted only for TIGIT expression due to insufficient sample sizes for the remainder of outcomes (“Methods”). To demonstrate model generalizability across varying patient demographics and sample collection processes, area under the receiver operating characteristic (AUROC) and area under the precision-recall curve (AUPRC) performance metrics were computed on hold-out sets composed exclusively of patient samples derived from tissue source sites not seen in the training sets (Supplementary Table ). HIF-based models were not predictive for every phenotype in each cancer type (hold-out AUROC < 0.6; see Supplementary Table for all results including negatives). In the successful prediction models (hold-out AUROC range = 0.601–0.864; Fig. 
), precision-recall curves revealed that models were robust to class imbalance, achieving AUPRC performance surpassing positive class prevalence by 0.104–0.306 (Supplementary Fig. ). On average across molecular phenotype prediction tasks, AUROC hold-out performance of our HIF-based linear models was comparable to that achieved by end-to-end deep learning models trained using the same architecture and hyper-parameters from Kather et al. (Supplementary Table ) . Differences in AUROC ranged from −0.16 to 0.25, with a median absolute difference of 0.065. Given the small sample sizes, HIF-based models are potentially better statistically powered. Indeed, HIF-based models outperformed end-to-end models in several prediction tasks, including most notably SKCM prediction of TIGIT expression, which boasted the smallest sample size. AUROC performance of our HIF-based linear model for PD-L1 expression in LUAD trained on roughly 300 WSIs was also comparable to that achieved by previously published “black-box” deep learning models trained on hundreds of thousands of paired H&E and PD-L1 example patches in NSCLC . While our HIF generation process explicitly encodes for interactions between biological entities (e.g., count of lymphocytes within 80 μm of fibroblasts), we also compared and achieved comparable hold-out AUROC and AUPRC performance between our HIF-based linear models against HIF-based random forest models, which directly account for interaction effects between HIFs (Supplementary Table ). Interpretable features enable interrogation and further validation of model parameters as well as generation of biological hypotheses. Toward this end, for each prediction task we identified the five most important HIF clusters as determined by magnitude of model coefficients (Fig. and Supplementary Fig. ) and computed cluster-level P values to evaluate significance (Supplementary Table ; “Methods”). 
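The site-level hold-out evaluation described above can be sketched with scikit-learn's grouped splitting, which guarantees that no tissue source site appears in both training and test sets. The plain logistic regression here is an illustrative stand-in for the paper's regularized linear models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, average_precision_score
from sklearn.model_selection import GroupShuffleSplit


def site_holdout_eval(X, y, sites, seed=0):
    """Train a linear model on HIFs and evaluate on a hold-out set whose
    tissue source sites never appear in training.

    X: (n_samples, n_hifs) HIF matrix; y: binarized phenotype labels;
    sites: (n_samples,) tissue source site identifiers.
    Returns (AUROC, AUPRC) on the held-out sites.
    """
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=seed)
    train_idx, test_idx = next(splitter.split(X, y, groups=sites))
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores = clf.predict_proba(X[test_idx])[:, 1]
    return (roc_auc_score(y[test_idx], scores),
            average_precision_score(y[test_idx], scores))
```

Grouping by site is what controls for the site-specific staining and processing artifacts discussed later in the limitations.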
As expected, prediction of PD-1 and PD-L1 involved similar HIF clusters (Pearson correlation between PD-1 and PD-L1 expression = 0.53; Supplementary Fig. ). For example, the extent of tumor inflammation, as measured by the count of cancer cells within 80 μm of lymphocytes, as well as the density of lymphocytes in CT + CAS, was significantly selected during model fitting for both PD-1 and PD-L1 expression in pan-cancer and BRCA models (Fig. and Supplementary Fig. ). Furthermore, in both LUAD and LUSC, the count of lymphocytes in CT + CAS was similarly predictive of PD-1 and PD-L1 expression. The importance of these HIFs that capture lymphocyte infiltration between and surrounding cancer cells corroborates prior literature, which demonstrated that TILs correlated strongly with higher expression levels of PD-1 and PD-L1 in early BRCA and NSCLC , . The area, morphology, or multiplicity of necrotic tissue proved predictive of PD-1 expression in LUAD, LUSC, and STAD models and of PD-L1 expression in pan-cancer, BRCA, and LUAD models, expanding upon prior findings that tumor necrosis correlated positively with PD-1 and PD-L1 expression in LUAD . The density, proximity, or clustering properties of plasma cells were predictive of PD-1 expression in all models excluding LUAD, suggesting a role for plasma cells in modulating PD-1 expression. Recent studies in SKCM, renal cell carcinoma, and soft-tissue sarcoma have demonstrated that an enrichment of B cells in tertiary lymphoid structures was positively predictive of response to immune checkpoint blockade therapy – . The density of fibroblasts in CAS or within 80 μm of the CSI was predictive of PD-L1 expression in LUAD and STAD, respectively, corroborating earlier discoveries that cancer-associated fibroblasts promote PD-L1 expression . Less is known about the relationship between the TME and CTLA-4 expression. 
By investigating predictive HIFs, we can begin to enumerate features of the TME that correlate with CTLA-4 expression. The proximity of lymphocytes to cancer cells (pan-cancer and BRCA), morphology of necrotic regions (LUAD and LUSC), and density of cancer cells in CT + CAS versus exclusively in CAS (BRCA and STAD) were predictive of CTLA-4 expression across multiple models (Fig. and Supplementary Fig. ). The area of necrotic tissue (pan-cancer and BRCA) and various morphological properties of necrotic regions, including perimeter and lacunarity (BRCA and STAD), were predictive of HRD (Fig. and Supplementary Fig. ). In HRD, ineffective DNA damage repair can result in the accumulation of severe DNA damage and subsequent cell death through apoptosis as well as necrosis , . The density and count of fibroblasts near or in CAS were also predictive of HRD in the pan-cancer and BRCA models, corroborating prior findings that persistent DNA damage and subsequent accumulation of unrepaired DNA strand breaks can induce reprogramming of normal fibroblasts into cancer-associated fibroblasts . Like the three other immune checkpoint proteins (PD-1, PD-L1, and CTLA-4), TIGIT expression was also associated with markers of tumor inflammation, including the count of cancer cells within 80 μm of lymphocytes (pan-cancer and BRCA), the total number of lymphocytes in CT + CAS (pan-cancer and BRCA), and the proportional count of lymphocytes to cancer cells within 80 μm of the CSI (LUAD) (Fig. and Supplementary Fig. ). These results corroborate prior findings that TIGIT expression, alongside PD-1 and PD-L1 expression (Pearson correlation between TIGIT and PD-1 = 0.84; TIGIT and PD-L1 = 0.56; Supplementary Fig. ), is correlated with TILs . 
HIF clusters capturing morphology and architecture of necrotic tissue (e.g., fractal dimension, lacunarity, extent, perimeter²/area) were associated with TIGIT expression in LUAD, LUSC, SKCM, and STAD models, although these relationships have yet to be investigated. In recent years, fusion approaches that combine deep learning with feature engineering have gained traction – . Our study combines exhaustive deep learning-based cell- and tissue-type classifications to compute image features that are both biologically relevant and human interpretable. We demonstrate that computed HIFs can recapitulate sequencing-based cell quantifications, capture canonical immune signatures such as leukocyte infiltration and TGF-β expression, and robustly predict five molecular phenotypes relevant to the efficacy of targeted cancer therapies. We also demonstrate the generalizability of our associations, as evidenced by similarly predictive HIF clusters across biopsy images derived from five different cancer types. Notably, we show that our HIF-based approach, which integrates the predictive power of deep learning with the interpretability of feature engineering, achieves comparable performance to that of black-box models. While prior studies have applied deep learning methodologies to capture cell-level information, such as the spatial configuration of immune and stromal cells , , or tissue-level information alone, our combined cell and tissue approach enables quantification of increasingly complex and expressive features of the TME, ranging from the mean cluster size of fibroblasts in CAS to the proximity of TILs or cancer-associated fibroblasts to the CSI. For instance, while TILs are emerging as a promising biomarker in solid tumors such as triple-negative and HER2-positive breast cancer , TILs differ from stromal lymphocytes, and substantial signal can be obtained by considering multiple cell–tissue combinations . 
By training models to make six-class cell-type and four-class tissue-type classifications from >1.6 million pathologist annotations, our approach is also able to capture more interactions between cell types and tissue regions than prior HIF-based studies – . Our approach exhaustively generates cell- and tissue-type predictions across entire WSIs at subcellular resolution (2 and 4 μm, respectively) and improves upon previous tiling approaches that downsample the image. The tissue visible in a WSI is already only a fraction of the tumor; using the entire slide reduces the probability of fixating on local effects and enables quantification of complex characteristics that span multiple tissue regions (e.g., multiplicity, solidity, and fractal dimension of necrotic regions). In addition, our approach of systematically quantifying specific and interpretable features of the tumor and its surroundings can enable hypothesis generation and a deeper understanding of the TME’s influence on drug response. Recent studies provide evidence that the tumor immune architecture may influence the clinical efficacy of immune checkpoint inhibitor and poly (ADP-ribose) polymerase inhibitor therapies . Lastly, during both model development and evaluation, we sought to emphasize robustness to real-world variability . In particular, we supplemented TCGA WSIs with additional diverse datasets during CNN training, integrated pathologist feedback into model iterations, and evaluated HIF-based model performance on hold-out sets composed exclusively of samples from unseen tissue source sites, improving upon prior approaches to predicting molecular outcomes from TCGA H&E images , . Our study data from TCGA carries several limitations. First, biopsy images submitted to the TCGA dataset are biased toward primary tumors and tumors with more definitive diagnoses that may not generalize well to ordinary clinical settings. 
Indeed, associations identified in primary tumors may not necessarily generalize to metastatic settings (Supplementary Fig. ). Second, TCGA is limited to images of H&E staining, which limits the breadth of information available to models. Integrating multimodal data containing stains against Ki-67 or immunohistological targets may increase confidence in cell classifications . Third, batch effects in TCGA can originate from differing tissue collection, sectioning, and processing procedures. Our validation procedure of partitioning by tissue source site does not account for all possible data artifacts, but it does control for confounding by sample collection, extraction, and other site-specific variables. Our HIF-based approach also limits the impact of spurious associations introduced by batch effects by pre-defining features based on biological phenomena. Fourth, TCGA has limited treatment data and clinical endpoint data are less reliable than molecular data. As TCGA samples were made available in 2013 , treatment regimens for these cases also predate the widespread adoption of immune checkpoint inhibitors. As such, our models were restricted to prediction of molecular phenotypes with relevance to drug response, in lieu of more direct clinical endpoints, such as RECIST and overall survival. While molecular phenotypes such as PD-L1 expression are informative for clinical endpoints such as sensitivity to immune checkpoint blockade , the ability to robustly predict biomarkers does not necessarily translate into robust prediction of relevant endpoints. Ultimately, direct prediction of patient outcomes is needed for clinical integration. Our study provides an interpretable framework to generate hypotheses for clinically relevant biomarkers that can be validated in future prospective studies . The curation of public datasets with matched pathology images and high-fidelity treatment information could help bridge the remaining gap. 
The HIF-based approach also has limitations. First, annotations vary in reliability. Macrophages are particularly difficult for pathologists to identify solely under H&E staining. While the accuracy of an individual pathologist identifying macrophages may be poor, our models represent an aggregate estimate based on training from hundreds of pathologist annotators, which may carry a more reliable signal , . Future development of our approach could extend to multiplex immunofluorescence technologies that measure spatial protein expression. These methods face challenges of increased cost, lower resolution, and lower scalability across WSIs but may improve upon traditional immunohistochemistry staining in predicting drug response to immune checkpoint inhibitors and reduce the need for expert annotation of cell types. Second, curation of high-fidelity, large-scale pathologist annotations can be time-consuming and expensive. Improvement of open-source segmentation models could accelerate the adoption of HIF-based models. Third, morphologically similar cells (e.g., macrophages, dendritic cells, endothelial cells, pericytes, myeloid-derived suppressor cells, and atypical lymphocytes) may all be captured under a single cell-type prediction. Thus HIFs may, in reality, capture information about a mixture of cell types. For example, in diffuse forms of STAD in which cancer cells invade smooth muscle tissue, our models misclassified certain smooth muscle cells as fibroblasts. Collecting targeted annotations of morphologically similar cell types may decrease noise in HIF estimates and improve performance. Lastly, HIFs are computed as summary statistics within each tissue type across WSIs. Applying “attention-based” HIF computation to focus on regions of interest and further account for spatial heterogeneity is a potential avenue for further research. 
Recent work – has revealed the weaknesses of low-interpretability models, including brittleness to population differences, vulnerabilities to technical artifacts, and susceptibility to unforeseen real-world failure modes. Although HIF-based approaches are not immune to such risks, they provide easier debugging and identification of failure modes than end-to-end models. Beyond suggesting interpretable hypotheses for causal mechanisms (e.g., the anti-tumor effect of high lymphocyte density), our HIF-based approach can be continually validated at several points: pathologists can judge the quality of cell- and tissue-type predictions, estimate the values of each relevant feature using traditional manual scoring, and note when variability in sample preparation or quality may significantly affect relevant features. Interpretable sets of HIFs, computed from tens of thousands of deep learning-based cell- and tissue-type predictions per patient, improve upon conventional “black-box” approaches that apply deep learning directly to WSIs, yielding models with millions of parameters and limited interpretability. While gradient-based saliency and class activation maps can identify relevant image regions in end-to-end CNN models – , they only enable subjective generation of hypotheses based on slide-by-slide qualitative assessment and are susceptible to human biases . Other model-agnostic interpretability methods, such as partial dependence plots and feature importance measures, are also unable to objectively and scalably connect pixel intensity features to biological phenomena. By contrast, predictive HIFs are directly mapped onto biological concepts and can be interpreted quantitatively across thousands of images. This allows investigators to directly identify concrete hypotheses and correlations that can be investigated further in causal analyses. 
Unlike “black-box” models that may opaquely rely on features that are predictive but disconnected from the outcome of interest, such as tissue excision or preparation artifacts (e.g., surgical or pathologist markings) , , HIF-based predictions can be traced to observable features, allowing model failures to be observed, explained, and addressed. Furthermore, HIF-based models enable users to explicitly define the set of features or hypotheses under examination, reducing the risk of spurious correlations and potentially increasing performance for low sample size prediction tasks. While additional comparative studies are needed, improved trust and reliability against unexpected failures would make HIF-based models a valuable alternative to end-to-end models. The ability to predict molecular phenotypes directly from WSIs in an interpretable fashion offers numerous potential benefits for clinical oncology. Hospitals, healthcare institutions, and biotechnology companies have decades of archival histopathology data captured from routine care and clinical trials . With improved accuracy, HIF-based models could leverage this information to enable the discovery of patient subpopulations with specific treatment susceptibilities, biomarkers predictive of drug response, and hypotheses for subsequent research.

Dense, high-resolution prediction of cell and tissue types using CNNs

In order to compute histopathological image features for each slide, it was necessary to first generate cell and tissue predictions per WSI. To this end, we asked a network of board-certified pathologists to label WSIs with both polygonal region annotations based on tissue type (cancer tissue, CAS, necrotic tissue, and normal tissue or background) and point annotations based on cell type (cancer cells, lymphocytes, macrophages, plasma cells, fibroblasts, and other cells or background). This collection of expert annotations was then used to train six-class cell-type and four-class tissue-type classifiers. 
Several steps were taken to ensure the accuracy and generalizability of our models. First, it was important to recognize that common cell and tissue types, such as CAS or cancer cells, show morphological differences between BRCA, LUAD, LUSC, SKCM, and STAD. As a result, we trained separate cell- and tissue-type detection models for each of these five cancer types, for a total of ten models. Second, it was important to ensure that our models did not overfit to the histological patterns found in the training set. To avoid this, we followed the conventional protocol of splitting our data into training, validation, and test sets and incorporated additional annotations of the same five cancer types from PathAI’s databases into the model development process. Together, these datasets represented a wide diversity of examples for each class in each cancer type, thus improving the generalizability of these models beyond the TCGA dataset. Using the combined dataset of annotated TCGA and additional WSIs, we trained deep CNNs to output dense pixelwise cell- and tissue-type predictions at a subcellular spatial resolution of 2 and 4 μm, respectively (spatial resolution dictated by stride). To ensure that our models achieved sufficient accuracy for feature extraction, models were trained in an iterative process, with each updated model’s predictions visualized as heatmaps to be reviewed by board-certified pathologists. In heatmap visualizations, tissue categories were segmented into colored regions, while cell types were identified as colored squares. This process continued until there were minimal systematic errors and the pathologists deemed the model sufficiently trustworthy for feature extraction. All WSIs used in this study were FFPE slides. Notably, tissue samples used for RNA-Seq and histology imaging were extracted from different portions of the patient’s tumor and may thus vary in their TME. 
Pathologist-in-the-loop CNN model training

During the CNN training process, we worked iteratively with three board-certified pathologists to conduct subjective evaluation of model predictions to inform multiple rounds of training. CNN models were initially trained on a set of primary annotations collected from the pathologist network. Following the conclusion of each training round (defined by model convergence), predicted cell and tissue heatmaps were reviewed for systematic errors (e.g., overprediction of fibroblasts, macrophages, and plasma cells; underprediction of necrotic tissue). New (secondary) annotations would then be collected from the pathologist network, focusing on areas of improvement (e.g., mislabeled macrophages), to initiate a subsequent training round. The final cell- and tissue-type models were selected based on a consensus across the three pathologists. To reduce the risk of overfitting, CNN models were frozen after selection and left unperturbed during molecular phenotype prediction using classical machine learning models. We computed validation metrics for cell- and tissue-type models on pooled primary and secondary annotations and visualized these metrics as confusion matrices.

Pathologist validation of cell-type models

To directly compare our cell-type predictions on TCGA WSIs against pathologist annotations, we generated 250 75 × 75 μm frames of cell-type overlays evenly sampled across the five cancer types and five cell types, each from a distinct WSI. The generation process sought to sample frames with both high and low densities of a given cell type according to our cell-type model predictions. Each frame was annotated for each of the five cell types by five board-certified pathologists. This allowed us to compare the counts of lymphocytes, plasma cells, fibroblasts, macrophages, and cancer cells predicted by our CNN cell-type models in each 75 × 75 μm frame against a consensus of pathologist annotation counts.
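As a concrete illustration, the frame-level comparison just described might be sketched as follows. The counts below are synthetic stand-ins (the real analysis used the 250 annotated frames), but the median-consensus and leave-one-out logic mirrors the description:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-frame lymphocyte counts from five pathologists
# and from the model, across 250 frames (values are illustrative only).
true_counts = rng.poisson(12, size=250)
pathologists = true_counts + rng.integers(-3, 4, size=(5, 250))
model = true_counts + rng.integers(-2, 3, size=250)

# Consensus = median of the five pathologist counts per frame.
consensus = np.median(pathologists, axis=0)
r_model, _ = pearsonr(model, consensus)

# Leave-one-out: each pathologist vs. the median of the other four.
loo = [pearsonr(pathologists[i],
                np.median(np.delete(pathologists, i, axis=0), axis=0))[0]
       for i in range(5)]

print(f"model vs consensus r = {r_model:.2f}")
print(f"mean leave-one-out r = {np.mean(loo):.2f}")
```

The leave-one-out correlations give a human baseline against which the model-versus-consensus correlation can be judged.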
We computed the Pearson correlation between our cell-type model counts and pathologist consensus counts across the 250 frames for all five cell types. Pathologist consensus counts were computed as the median of the five individual pathologist counts for a given frame and cell type. To capture inter-pathologist variability, we also computed the leave-one-out Pearson correlation between each individual pathologist’s annotation counts and the consensus (median) among the remaining four pathologists. We then obtained a point estimate and 95% confidence interval for the average performance of an annotator with respect to the leave-one-out consensus.

To assess model generalizability, we redeployed our BRCA cell-type model trained primarily on TCGA to exhaustively predict cell types on 72 H&E FFPE WSIs from an external BRCA dataset uploaded by Peikari et al. to TCIA. We then used the same analysis framework and metrics as above to assess concordance between our cell-type model and pathologist consensus across 250 75 × 75 μm frames (evenly sampled across the five cell types) generated from these external WSIs (Supplementary Fig. ).

Tissue-based feature extraction

Using the tissue-type predictions, we extracted 163 different region-based features from each WSI in the TCGA dataset. Each of these features belonged to one of three general categories. The first category consisted of areas (n = 13 HIFs). By simple pixel summation, we computed the total areas (in mm²) of cancer tissue, CAS, cancer tissue plus CAS, regions at the CSI, and necrosis in each slide. These features are interpretable and technically attainable by human pathologists but would be prohibitively time-consuming and inconsistent across pathologists to calculate in practice.
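The area features in this first category amount to pixel summation over a predicted tissue mask followed by a unit conversion. A minimal sketch (the class codes and the random mask are assumptions for illustration, not the pipeline's actual encoding):

```python
import numpy as np

# Hypothetical class codes for a four-class tissue mask.
CANCER, CAS, NECROSIS, OTHER = 0, 1, 2, 3
MPP = 4.0  # microns per pixel at the 4-um tissue-heatmap stride

rng = np.random.default_rng(1)
tissue_mask = rng.integers(0, 4, size=(1000, 1000))  # stand-in prediction

def area_mm2(mask, tissue_class, mpp=MPP):
    """Pixel summation -> area in mm^2 (1 mm^2 = 1e6 um^2)."""
    return np.count_nonzero(mask == tissue_class) * mpp ** 2 / 1e6

areas = {
    "cancer": area_mm2(tissue_mask, CANCER),
    "cas": area_mm2(tissue_mask, CAS),
    "cancer_plus_cas": area_mm2(tissue_mask, CANCER) + area_mm2(tissue_mask, CAS),
    "necrosis": area_mm2(tissue_mask, NECROSIS),
}
print(areas)
```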
The second category, which contributed the bulk of the features, made use of the publicly available scikit-image.measure.regionprops module to find the connected components of each of these tissue types at the pixel level using eight-connectivity. Once these connected components were found, we used both library-provided and self-implemented methods to extract a series of morphological features (n = 125 HIFs), similar to the approach suggested by Wang et al. in 2018. These HIFs measured a wide variety of tissue characteristics, ranging from quantitative, size-based measures like the number of connected components, major and minor axis lengths, convex areas, and filled areas, to more qualitative, shape-based measures like Euler numbers, lacunarity, and eccentricity. Recognizing the log-distribution of connected component sizes, we computed these features not just across all connected components but also for the largest connected component only and across the most “significant” connected components, defined as components >10% the size of the largest connected component. In aggregating metrics across considered components, we incorporated both averages and standard deviations of HIFs (e.g., standard deviation of eccentricities of significant regions of necrosis) to capture both summary metrics and metrics of intratumor heterogeneity.

The third category of features captured tissue architecture (n = 25 HIFs). Inspired by Lennon et al., we calculated the fractal dimensions and solidity measures of different tissue types, capturing both the roundness and filled-ness of the tissue, under the hypothesis that the ability of these measures to separate different subtypes of lung cancer might translate to a similar ability to predict clinically relevant phenotypes. These features allowed us to capture information about how tissue fills space, rather than just the summative sizes and shapes captured by the first and second categories.
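A compact sketch of the connected-component step, including the "significant component" cutoff defined above. Here scipy.ndimage stands in for the skimage regionprops workflow named in the text, and the binary mask is synthetic:

```python
import numpy as np
from scipy import ndimage

# Illustrative sketch: 8-connected components of a binary tissue mask, with
# "significant" components defined as >10% the size of the largest.
rng = np.random.default_rng(2)
necrosis_mask = rng.random((500, 500)) < 0.3  # synthetic stand-in mask

eight_conn = np.ones((3, 3), dtype=int)  # 8-connectivity structuring element
labels, n_components = ndimage.label(necrosis_mask, structure=eight_conn)

sizes = np.bincount(labels.ravel())[1:]  # component areas; index 0 = background
largest = sizes.max()
significant = sizes[sizes > 0.1 * largest]

# Example aggregate HIFs: count, mean, and SD over significant components.
print(n_components, len(significant), significant.mean(), significant.std())
```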
Cell- and tissue-based feature extraction

After obtaining six-class cell-type predictions for each pixel of a WSI, we generated five binary masks corresponding to each of the five specified cell types. We then combined cell- and tissue-level masks to compute properties of each cell type in each tissue type (e.g., fibroblasts in CAS), extracting 444 HIFs.

An initial group of features that were readily calculable from our model predictions included simple counts and densities of cell types in different tissue types. For example, an overlay of a particular slide’s lymphocyte detection mask on top of the same slide’s CAS mask could be used to calculate the number of TILs on a given slide. We could then divide this number by the area of CAS to find the associated density of TILs on the slide. By taking the “outer product” of cell and tissue types, we derived a wide array of composite features. In particular, we calculated counts, proportions, and densities of cells across different tissue types (e.g., density of macrophages in CAS versus in cancer tissue), under the hypothesis that these measures capture information that raw counts could not. To capture information regarding cell–cell proximity and interactions, we also calculated counts and proportions of each cell type within an 80-μm radius of each other cell type (e.g., count of lymphocytes within an 80-μm radius of fibroblasts). Cell-level counts, densities, and proportions comprised 264 HIFs.

For each cell–tissue combination, we next applied the Birch clustering method (as implemented in the sklearn.cluster Python module) to partition cells into clusters. To fit clustering structures as closely as possible to the spatial relationships found between cell types on the slide, we set a threshold of 100 and a branching factor of 10, and allowed the algorithm to optimize the number of clusters returned.
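The cell-clustering step above can be sketched with the stated parameters (threshold = 100, branching factor = 10, cluster count left to the algorithm). The coordinates are synthetic stand-ins for cell centers (in pixels) of one cell type within one tissue type:

```python
import numpy as np
from sklearn.cluster import Birch

# Synthetic spatial data: five blobs of ~200 cells each.
rng = np.random.default_rng(3)
blob_centers = rng.uniform(0, 5000, size=(5, 2))
cells = np.vstack([c + rng.normal(0, 40, size=(200, 2)) for c in blob_centers])

# n_clusters=None lets Birch return its own subclusters rather than
# forcing a fixed cluster count.
birch = Birch(threshold=100, branching_factor=10, n_clusters=None)
labels = birch.fit_predict(cells)

sizes = np.bincount(labels)
# Example spatial HIFs: number of clusters, cluster size mean and SD.
print(len(sizes), sizes.mean(), sizes.std())
```

From these cluster assignments, the dispersion, extent, and index-based HIFs described next can all be derived.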
We used the returned clusters to calculate a series of features designed to capture spatial relationships between individual cell types within a given tissue type, including number of clusters, cluster size mean and standard deviation (SD), within-cluster dispersion mean and SD, cluster extent mean and SD, the Ball–Hall Index, and the Calinski–Harabasz Index (n = 180 HIFs). For metrics where cluster exemplars were needed, the subcluster centers returned by the Birch algorithm were used.

Patient-level aggregation

Patients with multiple tissue samples were represented by the single sample with the largest area of cancer tissue plus CAS, computed during tissue-based feature extraction. All subsequent analyses were conducted at the patient level.

HIF clustering

Due to underlying biological relationships as well as the HIF generation process, there is significant correlation structure between many of the features. This presents a challenge for feature selection, as much of the information contained in one feature will also be present in another. It also makes it difficult to control for multiple hypothesis testing, because the underlying number of tested hypotheses is significantly fewer than the number of features computed. To identify groups of correlated HIFs, we clustered features via hierarchical agglomerative clustering using complete linkage, a cluster cutoff of 0.95, and pairwise correlation distance (1 − absolute Spearman correlation) as the distance metric. We defined a set of HIF clusters for each cancer type independently, as well as another set for pan-cancer analyses (Supplementary Data ). Clustering correlated features allows us to summarize the true underlying number of tested hypotheses.
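The HIF-clustering step might be sketched as below: complete-linkage agglomerative clustering with 1 − |Spearman correlation| as the distance and the 0.95 cutoff from the text. The patients-by-HIFs matrix is synthetic; features 4 and 5 are built as near-copies of features 0 and 1 so that correlated features should share a cluster:

```python
import numpy as np
from scipy.cluster import hierarchy
from scipy.spatial.distance import squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
base = rng.normal(size=(300, 4))
hifs = np.column_stack([
    base,
    2 * base[:, 0] + rng.normal(0, 0.01, 300),   # ~ feature 0
    -base[:, 1] + rng.normal(0, 0.01, 300),      # ~ feature 1 (anti-correlated)
])

rho, _ = spearmanr(hifs)                # 6 x 6 feature correlation matrix
dist = 1 - np.abs(rho)                  # pairwise correlation distance
np.fill_diagonal(dist, 0.0)
linkage = hierarchy.linkage(squareform(dist, checks=False), method="complete")
clusters = hierarchy.fcluster(linkage, t=0.95, criterion="distance")

print(clusters)  # features 0/4 and 1/5 land in shared clusters
```

Using the absolute correlation means strongly anti-correlated features are grouped together, matching the stated distance metric.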
Visualization of cancer types in HIF space

UMAP was applied for dimensionality reduction and visualization of patient samples from the 607-dimensional HIF space into two dimensions (using parameters: number of neighbors = 15, training epochs = 500, distance metric = Euclidean). The V-Measure was computed to compare BRCA, STAD, SKCM, and NSCLC (LUAD and LUSC combined) classes against clusters generated by k-means (k = 4) applied to the 2-D UMAP projection. To quantify differences between cancer types, HIF values were normalized pan-cancer into Z-scores. Median Z-scores were then computed per cancer type across 20 HIFs, each representing 1 of the 20 HIF clusters defined pan-cancer. Representative HIFs were selected based on subjective interpretability and high variance across cancer types. To determine the statistical significance of median Z-scores that were greater in one cancer type relative to others, P values were estimated with the one-sided Mann–Whitney U test, considering NSCLC subtypes LUAD and LUSC as one type.

Validation of HIFs against molecular signatures

To validate the ability of HIFs to capture meaningful cell- and tissue-level information, we computed Spearman correlations between HIFs and four canonical immune signatures from the PanImmune dataset: (1) leukocyte infiltration, (2) IgG expression, (3) TGF-β expression, and (4) wound healing. We also assessed HIF correlation to (5) angiogenesis signature, also derived from PanImmune, and (6) hypoxia score, derived from Buffa et al. All six molecular signatures were quantified by mapping mRNA sequencing reads against gene sets associated with the aforementioned known immune and gene expression signatures. To estimate the correlation between HIF clusters and immune signatures, we computed the median absolute Spearman correlation per cluster and combined dependent P values associated with individual correlations via the Empirical Brown’s method.
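The per-cluster summary just described (Spearman correlation of each HIF against a signature, then the median absolute correlation per HIF cluster) might be sketched as follows. The signature, the HIFs, and the cluster assignment are all synthetic stand-ins:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
signature = rng.normal(size=400)                 # e.g., leukocyte infiltration
hifs = np.column_stack([
    signature + rng.normal(0, 0.5, 400),         # cluster 1: signature-related
    -signature + rng.normal(0, 0.5, 400),        # cluster 1
    rng.normal(size=400),                        # cluster 2: unrelated
    rng.normal(size=400),                        # cluster 2
])
cluster_of = np.array([1, 1, 2, 2])

rhos = np.array([spearmanr(hifs[:, j], signature)[0] for j in range(4)])
median_abs = {c: float(np.median(np.abs(rhos[cluster_of == c])))
              for c in (1, 2)}
print(median_abs)  # cluster 1 correlates strongly; cluster 2 does not
```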
To control the false discovery rate, combined P values per cluster were then corrected using the Benjamini–Hochberg procedure. Correlation analyses were conducted for cancer types collectively and individually, using HIF clusters defined across all cancer types for assessment of concordance. In addition, image-based cell quantifications for leukocyte fraction, lymphocyte fraction, and plasma cell fraction were validated by Spearman correlation to their sequencing-based equivalents from matched TCGA tumor samples, computed using CIBERSORT (cell-type identification by estimating relative subsets of RNA transcripts). CIBERSORT uses an immune signature matrix for deconvolution of observed RNA-Seq read counts into estimates of relative contributions between 22 immune cell profiles.

Molecular phenotype label curation

To reduce bias and protect against overfitting, the molecular phenotypes assessed in this study were selected after the cell- and tissue-type models were frozen. PD-1, PD-L1, and CTLA-4 expression data for each cancer type were collected from the PanImmune dataset, while TIGIT expression data were collected from the National Cancer Institute Genomic Data Commons. PD-1, PD-L1, CTLA-4, and TIGIT expression levels were quantified from mapped mRNA reads against the genes PDCD1, CD274, CTLA-4, and TIGIT, respectively, and normalized as Z-scores across all cancer types in TCGA. HRD scores were collected from Knijnenburg et al. The HRD score was calculated as the sum of three components: (1) the number of subchromosomal regions with allelic imbalance extending to the telomere, (2) the number of chromosomal breaks between adjacent regions of at least 10 Mb (megabase pairs), and (3) the number of loss-of-heterozygosity regions of intermediate size (at least 15 Mb but less than whole-chromosome length).
Continuous immune checkpoint protein expression and HRD scores were binarized to high versus low classes using Gaussian mixture model (GMM) clustering with unequal variance (Supplementary Fig. ). The binary threshold was defined as the intersection of the empirical densities between the two GMM-defined clusters. To evaluate the extent to which prediction tasks were correlated, Pearson correlation and percentage agreement metrics were computed pan-cancer (n = 1893 patients) between the five molecular phenotypes in continuous and binarized form, respectively (Supplementary Fig. ).

Hold-out set definition by TCGA tissue source site

TCGA provides tissue source site information, which denotes the medical institution or company that provided the patient sample. For each prediction task (described below), a hold-out set was defined as approximately 20–30% of patient samples obtained from sites not seen in the training set (Supplementary Table ). This validation methodology enables us to demonstrate model generalizability across varying patient demographics and tissue collection processes intrinsic to different tissue source sites. Patient barcodes corresponding to hold-out and training sets are provided in Supplementary Data .

Supervised prediction of molecular phenotypes

We conducted supervised prediction of binarized high versus low expression of five clinically relevant phenotypes: (1) PD-1 expression, (2) PD-L1 expression, (3) CTLA-4 expression, (4) HRD score, and (5) TIGIT expression. Predictions were conducted pan-cancer as well as for cancer types individually. SKCM was excluded from prediction tasks 1 to 4 due to insufficient outcome labels (number of observations <100 for tasks 1–3; number of positive labels <10 for task 4). For each of the 26 prediction tasks, we trained a logistic sparse group lasso (SGL) model tuned by nested cross-validation (CV) with three outer folds and five inner folds using the corresponding training set.
SGL provides regularization at both an individual covariate level (as in traditional lasso) and a user-defined group level, thus encouraging group-wise and within-group sparsity. The HIF clusters defined per cancer type and pan-cancer (previously described) were inputted as groups. HIFs were normalized to mean = 0 and SD = 1. In accordance with nested CV, hyper-parameter tuning was conducted using the inner loops, and mean generalization error and variance were estimated from the outer loops. The three tuned models, each trained on two of the three outer folds and evaluated on the third outer fold, were ensembled by averaging predicted probabilities for final evaluation (reported in Fig. and Supplementary Table ) on the hold-out set. Hold-out performance was evaluated by AUROC and AUPRC. To identify predictive features, beta values from the three outer fold models were averaged to obtain ensemble beta values per HIF (see Fig. caption for more details).

End-to-end model benchmarking

To compare our HIF-based approach against conventional end-to-end models, we trained 26 distinct CNNs, one for each of the 26 molecular phenotype prediction tasks described above, using single-instance learning. We used the computationally efficient ShuffleNet architecture and the same hyper-parameters described in Kather et al. (batch size of 512, patch size of 512 × 512 pixels at 2 μm per pixel, 30 unfrozen layers, learning rate of 5 × 10⁻⁵) without additional tuning. The same training and hold-out sets from HIF-based model development were used to ensure that AUROC metrics were comparable.

Random forest model comparison

Additionally, we compared the performance (AUROC and AUPRC) of HIF-based linear models against HIF-based random forest models.
Hyperparameters were all set to defaults for all 26 molecular phenotype prediction tasks: number of trees = 500, number of variables randomly sampled as candidates at each split = 25 (approximately the square root of the 607 features), and minimum size of terminal nodes = 1. Random forest models account for interaction effects and can thus test the hypothesis that capturing interactions between the 607 HIFs can improve model performance. Once again, we maintained the same training and hold-out sets used during HIF-based linear model development.

Statistical analysis

To compute 95% confidence intervals for each prediction task, we generated empirical distributions of AUROC and AUPRC metrics, each consisting of 1000 bootstrapped metrics, as recommended by multiple sources. Bootstrapped metrics were obtained by sampling with replacement from matched model predictions (probabilities) and true labels for the corresponding hold-out set and re-computing AUROC and AUPRC on these two bootstrapped vectors. P values for AUROC and AUPRC hold-out metrics were denoted as the probability that either metric was <0.5 under the aforementioned empirical distributions and were multiple-hypothesis-corrected across the 26 prediction tasks using the Benjamini–Hochberg procedure. P values for ensemble beta values of predictive HIFs were computed using a permutation test with 1000 iterations. During each iteration, labels in the training set were permuted and the previously described training process of nested CV and ensembling was re-applied to generate a new set of ensemble beta values per HIF. P values for individual HIFs were then obtained by comparing beta values in the original ensemble model against the corresponding null distribution of ensemble beta values. Individual HIF P values were combined into cluster-level P values via the Empirical Brown’s method and corrected using the Benjamini–Hochberg procedure.
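The bootstrap procedure for the confidence intervals and P values can be sketched as follows, with synthetic predictions and labels standing in for a hold-out set:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(6)
labels = rng.integers(0, 2, size=500)
probs = np.clip(0.35 + 0.3 * labels + rng.normal(0, 0.2, size=500), 0, 1)

# Resample matched (prediction, label) pairs with replacement 1000 times
# to build empirical AUROC/AUPRC distributions.
boot_auroc, boot_auprc = [], []
for _ in range(1000):
    idx = rng.integers(0, len(labels), size=len(labels))
    if labels[idx].min() == labels[idx].max():
        continue  # skip degenerate resamples containing a single class
    boot_auroc.append(roc_auc_score(labels[idx], probs[idx]))
    boot_auprc.append(average_precision_score(labels[idx], probs[idx]))

ci_lo, ci_hi = np.percentile(boot_auroc, [2.5, 97.5])
p_auroc = np.mean(np.array(boot_auroc) < 0.5)  # P(AUROC < 0.5)
print(f"AUROC 95% CI: [{ci_lo:.3f}, {ci_hi:.3f}], P = {p_auroc:.3f}")
```

The resulting P value is the tail probability that the metric falls below chance (0.5) under the bootstrap distribution, which would then be Benjamini–Hochberg-corrected across tasks.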
Data analyses in this study used the programming languages Python version 3.7.4 and R version 3.6.2. Analysis code has been uploaded to public repositories.

Reporting summary

Further information on research design is available in the Reporting Summary linked to this article.
Using the combined dataset of annotated TCGA and additional WSIs, we trained deep CNNs to output dense pixelwise cell- and tissue-type predictions at a subcellular spatial resolution of 2 and 4 μm, respectively (spatial resolution dictated by stride). To ensure that our models achieved sufficient accuracy for feature extraction, models were trained in an iterative process, with each updated model’s predictions visualized as heatmaps to be reviewed by board-certified pathologists. In heatmap visualizations, tissue categories were segmented into colored regions, while cell types were identified as colored squares. This process continued until there were minimal systematic errors and the pathologists deemed the model sufficiently trustworthy for feature extraction. All WSIs used in this study were FFPE slides. This means that tissue samples used for RNA-Seq and histology imaging were extracted from different portions of the patient’s tumor and may thus vary in their TME. During the CNN training process, we worked iteratively with three board-certified pathologists to conduct subjective evaluation of model predictions to inform multiple rounds of training. CNN models were initially trained on a set of primary annotations collected from the pathologist network. Following the conclusion of each training round (defined by model convergence), predicted cell and tissue heatmaps were reviewed for systematic errors (e.g., overprediction of fibroblasts, macrophages, and plasma cells, underprediction of necrotic tissue). New (secondary) annotations would then be collected from the pathologist network focusing on areas of improvement (e.g., mislabeled macrophages) to initiate a subsequent training round. The final cell- and tissue-type models were selected based on a consensus across the three pathologists. To reduce the risk of overfitting, CNN models were frozen after selection and unperturbed during molecular phenotype prediction using classical machine learning models. 
We computed validation metrics for cell- and tissue-type models on pooled primary and secondary annotations and visualized these metrics as confusion matrices. To directly compare our cell-type predictions on TCGA WSIs against pathologist annotations, we generated 250 75 × 75 μm frames of cell-type overlays evenly sampled across the five cancer types and five cell types, each from a distinct WSI. The generation process sought to sample frames with both high and low densities of a given cell-type according to our cell-type model predictions. Each frame was annotated for each of the five cell types by five board-certified pathologists. This allows us to compare the count of lymphocytes, plasma cells, fibroblasts, macrophages, and cancer cells in each 75 × 75 μm frame predicted by our CNN cell-type models against a consensus of pathologist annotation counts. We computed the Pearson correlation between our cell-type model counts and pathologist consensus counts across the 250 frames for all five cell types. Pathologist consensus counts were computed as the median of the five individual pathologist counts for a given frame and cell type. To capture inter-pathologist variability, we also computed the leave-one-out Pearson correlation between each individual pathologist’s annotation counts and the consensus (median) among the remaining four pathologists. We then obtained a point estimate and 95% confidence interval for the average performance of an annotator with respect to the leave-one-out consensus. To assess model generalizability, we redeployed our BRCA cell-type model trained primarily on TCGA to exhaustively predict cell types on 72 H&E, FFPE WSIs from an external BRCA dataset uploaded by Peikari et al. to TCIA . 
We then used the same analysis framework and metrics as above to assess concordance between our cell-type model and pathologist consensus across 250 75 × 75 μm frames (evenly sampled across the five cell types) generated from these external WSIs (Supplementary Fig. ). Using the tissue-type predictions, we extracted 163 different region-based features from each WSI in the TCGA dataset. Each of these features belonged to one of three general categories. The first category consisted of areas ( n = 13 HIFs). By simple pixel summation, we computed the total areas (in mm 2 ) of cancer tissue, CAS, cancer tissue plus CAS, regions at the CSI, and necrosis in each slide. These features are interpretable and technically attainable by human pathologists but would be prohibitively time-consuming and inconsistent across pathologists to calculate in practice. The second category, which contributed the bulk of the features, made use of the publicly available scikit-image.measure.regionprops module to find the connected components of each of these tissue types at the pixel-level using eight-connectivity. Once these connected components were found, we used both library-provided and self-implemented methods to extract a series of morphological features ( n = 125 HIFs), similar to the approach suggested by Wang et al. in 2018 . These HIFs measured a wide variety of tissue characteristics, ranging from quantitative, size-based measures like the number of connected components, major and minor axis lengths, convex areas, and filled areas, to more qualitative, shape-based measures like Euler numbers, lacunarity, and eccentricity. Recognizing the log-distribution of connected component size, we computed these features not just across all connected components but also for both the largest connected component only and across the most “significant” connected components, defined as components >10% the size of the largest connected component. 
In aggregating metrics across considered components, we incorporated both averages and standard deviations of HIFs (e.g., standard deviation of eccentricities of significant regions of necrosis) to capture both summary metrics and metrics of intratumor heterogeneity. The third category of features captures tissue architecture ( n = 25 HIFs). Inspired by Lennon et al. , we calculated the fractal dimensions and solidity measures of different tissue types, capturing both the roundness and filled-ness of the tissue, under the hypothesis that the ability for these measures to separate different subtypes of lung cancer might translate to a similar ability to predict clinically relevant phenotypes. These features allowed us to capture information about how tissue filled up space, rather than just the summative sizes and shapes captured by the first and second categories. After obtaining six-class cell-type predictions for each pixel of a WSI, we generated five binary masks corresponding to each of the five specified cell types. We then combined cell- and tissue-level masks to compute properties of each cell type in each tissue type (e.g., fibroblasts in CAS), extracting 444 HIFs. An initial group of features that were readily calculable from our model predictions included simple counts and densities of cell types in different tissue types. For example, an overlay of a particular slide’s lymphocyte detection mask on top of the same slide’s CAS mask could be used to calculate the number of TILs on a given slide. We could then divide this number by the area of CAS to find the associated density of TILs on the slide. By taking the “outer product” of cell and tissue types, we derived a wide array of composite features. In particular, we calculated counts, proportions, and densities of cells across different tissue types (e.g., density of macrophages in CAS versus in cancer tissue), under the hypothesis that these measures capture information that raw counts could not. 
To capture information regarding cell–cell proximity and interactions, we also calculated counts and proportions of each cell type within an 80-μm radius of each other cell type (e.g., count of lymphocytes within an 80-μm radius of fibroblasts). Cell-level counts, densities, and proportions comprised 264 HIFs. For each cell–tissue combination, we next applied the Birch clustering method (as implemented in the sklearn.cluster Python module) to partition cells into clusters . To fit clustering structures as closely as possible to the spatial relationships found between cell types on the slide, we set a threshold of 100, a branching factor of 10, and allowed the algorithm to optimize the number of clusters returned. We used the returned clusters to calculate a series of features designed to capture spatial relationships between individual cells types within a given tissue type, including number of clusters, cluster size mean and standard deviation (SD), within-cluster dispersion mean and SD, cluster extent mean and SD, the Ball–Hall Index, and Calinski–Harabasz Index ( n = 180 HIFs). For metrics where cluster exemplars were needed, the subcluster centers returned by the Birch algorithm were used. Patients with multiple tissue samples were represented by the single sample with the largest area of cancer tissue plus CAS, computed during tissue-based feature extraction. All subsequent analyses were conducted at the patient level. Due to underlying biological relationships as well as the HIF generation process, there is significant correlation structure between many of the features. This presents a challenge of feature selection as much of the information contained in one feature will also be present in another. It also makes it difficult to control for multiple hypothesis testing, because the underlying number of tested hypotheses is significantly fewer than the number of features computed. 
To identify groups of correlated HIFs, we clustered features via hierarchical agglomerative clustering using complete linkage, a cluster cutoff of 0.95, and pairwise correlation distance (1 − absolute Spearman correlation) as the distance metric. We defined a set of HIF clusters for each cancer type independently, as well as another set for pan-cancer analyses (Supplementary Data ). Clustering correlated features allows us to summarize the true underlying number of tested hypotheses. UMAP was applied for dimensionality reduction and visualization of patient samples from the 607-dimension HIF space into two dimensions (using parameters: number of neighbors = 15, training epochs = 500, distance metric = Euclidean). The V-Measure was computed to compare BRCA, STAD, SKCM, and NSCLC (LUAD and LUSC combined) classes against clusters generated by k -means ( k = 4) applied to the 2-D UMAP projection , . To quantify differences between cancer types, HIF values were normalized pan-cancer into Z -scores. Median Z -scores were then computed per cancer type across 20 HIFs, each representing 1 of the 20 HIF clusters defined pan-cancer. Representative HIFs were selected based on subjective interpretability and high variance across cancer types. To determine the statistical significance of median Z -scores that were greater in one cancer type relative to others, P values were estimated with the one-sided Mann–Whitney U test, considering NSCLC subtypes LUAD and LUSC as one type. To validate the ability of HIFs to capture meaningful cell- and tissue-level information, we computed Spearman correlations between HIFs and four canonical immune signatures from the PanImmune dataset : (1) leukocyte infiltration, (2) IgG expression, (3) TGF-β expression, and (4) wound healing. We also assessed HIF correlation to (5) angiogenesis signature, also derived from PanImmune, and (6) hypoxia score, derived from Buffa et al. , . 
All six molecular signatures were quantified by mapping mRNA sequencing reads against gene sets associated with the aforementioned known immune and gene expression signatures. To estimate the correlation between HIF clusters and immune signatures, we computed the median absolute Spearman correlation per cluster and combined dependent P values associated with individual correlations via the Empirical Brown’s method . To control the false discovery rate, combined P values per cluster were then corrected using the Benjamini–Hochberg procedure . Correlation analyses were conducted for cancer types collectively and individually, using HIF clusters defined across all cancer types for assessment of concordance. In addition, image-based cell quantifications for leukocyte fraction, lymphocyte fraction, and plasma cell fraction were validated by Spearman correlation to their sequencing-based equivalents from matched TCGA tumor samples, computed using CIBERSORT (cell-type identification by estimating relative subsets of RNA transcripts) . CIBERSORT uses an immune signature matrix for deconvolution of observed RNA-Seq read counts into estimates of relative contributions between 22 immune cell profiles . To reduce bias and protect against overfitting, the molecular phenotypes assessed in this study were selected after the cell- and tissue-type models were frozen. PD-1, PD-L1, and CTLA-4 expression data for each cancer type were collected from the PanImmune dataset , while TIGIT expression data were collected from the National Cancer Institute Genomic Data Commons . PD-1, PD-L1, CTLA-4, and TIGIT expression levels were quantified from mapped mRNA reads against genes PDCD1, CD274, CTLA-4, and TIGIT, respectively, and normalized as Z -scores across all cancer types in TCGA. HRD scores were collected from Knijnenburg et al. . 
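The false-discovery-rate step can be sketched with statsmodels, using hypothetical cluster-level P values (the preceding Empirical Brown's combination of dependent P values is omitted here):

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Hypothetical combined P values, one per HIF cluster
p_combined = np.array([0.001, 0.004, 0.03, 0.2, 0.8])

# Benjamini-Hochberg procedure to control the false discovery rate
reject, p_adj, _, _ = multipletests(p_combined, alpha=0.05, method="fdr_bh")
print(p_adj)  # approximately [0.005, 0.01, 0.05, 0.25, 0.8]
```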
The HRD score was calculated as the sum of three components: (1) number of subchromosomal regions with allelic imbalance extending to the telomere, (2) number of chromosomal breaks between adjacent regions of least 10 Mb (mega base pairs), and (3) number of loss of heterozygosity regions of intermediate size (at least 15 Mb but less than whole chromosome length). Continuous immune checkpoint protein expression and HRD scores were binarized to high versus low classes using Gaussian mixture model (GMM) clustering with unequal variance (Supplementary Fig. ). The binary threshold was defined as the intersection of the empirical densities between the two GMM-defined clusters. To evaluate the extent to which prediction tasks were correlated, Pearson correlation and percentage agreement metrics were computed pan-cancer ( n = 1893 patients) between the five molecular phenotypes in continuous and binarized form, respectively (Supplementary Fig. ). TCGA provides tissue source site information, which denotes the medical institution or company that provided the patient sample. For each prediction task (described below), a hold-out set was defined as approximately 20–30% of patient samples obtained from sites not seen in the training set (Supplementary Table ). This validation methodology enables us to demonstrate model generalizability across varying patient demographics and tissue collection processes intrinsic to different tissue source sites. Patient barcodes corresponding to hold-out and training sets are provided in Supplementary Data . We conducted supervised prediction of binarized high versus low expression of five clinically relevant phenotypes: (1) PD-1 expression, (2) PD-L1 expression, (3) CTLA-4 expression, (4) HRD score, and (5) TIGIT expression. Predictions were conducted pan-cancer as well as for cancer types individually. 
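The binarization step can be sketched with scikit-learn's GaussianMixture, where a two-component model with per-component (unequal) variance is fit and the threshold is taken as the intersection of the two weighted component densities between the means. The expression values below are synthetic:

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Hypothetical continuous expression Z-scores with a low and a high mode
expr = np.concatenate([rng.normal(-1.0, 0.4, 300), rng.normal(1.5, 0.7, 200)])

# covariance_type="full" gives each component its own variance (unequal variance)
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(expr.reshape(-1, 1))

means = gmm.means_.ravel()
sds = np.sqrt(gmm.covariances_).ravel()
w = gmm.weights_

# Binary threshold: intersection of the weighted empirical densities
grid = np.linspace(means.min(), means.max(), 10_000)
d0 = w[0] * norm.pdf(grid, means[0], sds[0])
d1 = w[1] * norm.pdf(grid, means[1], sds[1])
threshold = grid[np.argmin(np.abs(d0 - d1))]

high = expr > threshold  # binarized high vs. low class
print(threshold)
```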
SKCM was excluded from prediction tasks 1 to 4 due to insufficient outcome labels (number of observations <100 for tasks 1–3; number of positive labels <10 for task 4). For each of the 26 prediction tasks, we trained a logistic sparse group lasso (SGL) model tuned by nested cross-validation (CV) with three outer folds and five inner folds using the corresponding training set. SGL provides regularization at both an individual covariate (as in traditional lasso) and user-defined group level, thus encouraging group-wise and within-group sparsity. The HIF clusters defined per cancer type and pan-cancer (previously described) were inputted as groups. HIFs were normalized to mean = 0 and SD = 1. In accordance with nested CV, hyper-parameter tuning was conducted using the inner loops and mean generalization error and variance were estimated from the outer loops. The three tuned models, each trained on two of the three outer folds and evaluated on the third outer fold, were ensembled by averaging predicted probabilities for final evaluation (reported in Fig. and Supplementary Table ) on the hold-out set. Hold-out performance was evaluated by AUROC and AUPRC. To identify predictive features, beta values from the three outer fold models were averaged to obtain ensemble beta values per HIF (see Fig. caption for more details). To compare our HIF-based approach against conventional end-to-end models, we trained 26 distinct CNNs for each of the 26 molecular phenotype prediction tasks described above using single-instance learning. We used the computationally efficient ShuffleNet architecture and the same hyper-parameters described in Kather et al. (batch size of 512, patch size of 512 × 512 pixels at 2 μm per pixel, 30 unfrozen layers, learning rate of 5 × 10 −5 ) without additional tuning. The same training and hold-out sets from HIF-based model development were used to ensure that AUROC metrics were comparable. 
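The nested-CV-plus-ensembling logic described above can be sketched as follows. Two simplifications are assumed: a plain L1-penalized logistic regression stands in for the sparse group lasso (no group-level penalty), and a random stratified split replaces the tissue-source-site hold-out; the data are synthetic:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, StratifiedKFold, train_test_split

X, y = make_classification(n_samples=400, n_features=50, n_informative=8, random_state=0)
X_tr, X_ho, y_tr, y_ho = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

outer = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
hold_out_probs, betas = [], []
for tr_idx, _ in outer.split(X_tr, y_tr):
    # Inner 5-fold CV tunes the regularization strength on two of three outer folds
    inner = GridSearchCV(
        LogisticRegression(penalty="l1", solver="liblinear", max_iter=1000),
        param_grid={"C": [0.01, 0.1, 1.0]},
        cv=5, scoring="roc_auc",
    )
    inner.fit(X_tr[tr_idx], y_tr[tr_idx])
    hold_out_probs.append(inner.predict_proba(X_ho)[:, 1])
    betas.append(inner.best_estimator_.coef_.ravel())

# Ensemble the three tuned models by averaging predicted probabilities,
# and average beta values to obtain ensemble coefficients per feature
ensemble_prob = np.mean(hold_out_probs, axis=0)
ensemble_beta = np.mean(betas, axis=0)
print(roc_auc_score(y_ho, ensemble_prob))
```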
Additionally, we compared the performance (AUROC and AUPRC) of HIF-based linear models against HIF-based random forest models. Hyperparameters were all set to defaults for all 26 molecular phenotype prediction tasks: number of trees = 500, number of variables randomly sampled as candidates at each split = 25 (square root of the number of features—607), minimum size of terminal nodes = 1. Random forest models account for interaction effects and can thus test the hypothesis that capturing interactions between the 607 HIFs can improve model performance . Once again, we maintained the same training and hold-out sets used during HIF-based linear model development. To compute 95% confidence intervals for each prediction task, we generated empirical distributions of AUROC and AUPRC metrics each consisting of 1000 bootstrapped metrics, as recommended by multiple sources . Bootstrapped metrics were obtained by sampling with replacement from matched model predictions (probabilities) and true labels for the corresponding hold-out set and re-computing AUROC and AUPRC on these two bootstrapped vectors. P values for AUROC and AUPRC hold-out metrics were denoted as the probability either metric was <0.5 under the aforementioned empirical distributions and multiple-hypothesis-corrected across the 26 prediction tasks using the Benjamini–Hochberg procedure . P values for ensemble beta values of predictive HIFs were computed using a permutation test with 1000 iterations. During each iteration, labels in the training set were permuted and the previously described training process of nested CV and ensembling was re-applied to generate a new set of ensemble beta values per HIF. P values for individual HIFs were then obtained by comparing beta values in the original ensemble model against the corresponding null distribution of ensemble beta values. 
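The bootstrap procedure for confidence intervals and P values can be sketched as follows, with synthetic hold-out labels and probabilities standing in for real model output:

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

rng = np.random.default_rng(3)
# Hypothetical hold-out labels and matched model probabilities
y_true = rng.integers(0, 2, 200)
y_prob = np.clip(y_true * 0.4 + rng.normal(0.3, 0.25, 200), 0, 1)

boot_auroc, boot_auprc = [], []
for _ in range(1000):
    idx = rng.integers(0, len(y_true), len(y_true))  # sample with replacement
    if y_true[idx].min() == y_true[idx].max():
        continue  # both classes are required to compute AUROC
    boot_auroc.append(roc_auc_score(y_true[idx], y_prob[idx]))
    boot_auprc.append(average_precision_score(y_true[idx], y_prob[idx]))

ci_low, ci_high = np.percentile(boot_auroc, [2.5, 97.5])
# P value: probability the metric is < 0.5 under the empirical distribution
p_auroc = np.mean(np.array(boot_auroc) < 0.5)
print(ci_low, ci_high, p_auroc)
```

In the study these raw P values would then be corrected across the 26 prediction tasks with the Benjamini–Hochberg procedure.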
Individual HIF P values were combined into cluster-level P values via the Empirical Brown's method and corrected using the Benjamini–Hochberg procedure . Data analyses in this study used the programming languages Python version 3.7.4 and R version 3.6.2. Analysis code has been uploaded to public repositories . Further information on research design is available in the Reporting Summary linked to this article.
Analyzing microglial phenotypes across neuropathologies: a practical guide

More than 100 years ago, a dispute arose between two famous Spanish neuroscientists. Santiago Ramón y Cajal had predicted a third element (i.e., a third cell type) besides neurons and astrocytes . Over time, it became evident that the third element comprised microglia and oligodendroglia. Since his tutor's hypothesis lacked further specifications concerning the features, morphology and functions of the third element, it was Pío del Río Hortega who first characterized microglia . The controversy about the origin of the newly discovered cell type could not be settled at that time. For many decades after the dispute, little attention was paid to microglia. It is therefore not surprising that it took almost a century to finally clarify the origin of microglia. During primitive hematopoiesis in mice, c-kit+ stem cells in the extraembryonic yolk sac mature into CD45+ c-kit− Cx3cr1+ macrophages . Those cells invade the developing brain via the primitive bloodstream and give rise to definitive microglia . Similar to circulating myeloid cells and tissue macrophages in the periphery, microglia have dual functions during non-inflammatory and inflammatory conditions . In a non-inflammatory, healthy central nervous system (CNS), microglia were formerly believed to be quiescent and resting. On the contrary, microglia have motile processes and protrusions. They are constantly moving and thereby actively surveying their environment. During postnatal development, microglia are involved in axonal guidance and synaptic pruning, the physiological process in which abundant synapses are eliminated . Neuronal circuit maturation therefore heavily relies on microglial involvement. Another function of microglia is to phagocytose apoptotic debris, reducing proinflammatory cytokine secretion and minimizing tissue injury .
As a result, their exaggerated activation may also cause damage to the CNS . Given that microglia can be both beneficial and detrimental to brain homeostasis and brain health, it has been shown that microglia are involved in the pathogenesis of a variety of diseases such as multiple sclerosis (MS) , Alzheimer's disease , Parkinson's disease , autism spectrum disorder (ASD) and even COVID-19 encephalitis recently . Therefore, studying the underlying mechanisms of microglial activity potentially helps to better understand many neurological and neuropsychiatric disorders and may provide novel therapeutic approaches. In addition, studying the microglial phenotype can also be interesting for researchers outside of the microglial community. As guardians of the brain, the cells rapidly react to any changes in brain homeostasis. In the event of brain pathologies, the microglial phenotype is certainly altered. Therefore, analyzing microglia can be a sensitive tool to check for CNS involvement in any given patient specimen or mouse model. Within the past years, the microglia field and its technical possibilities have been evolving enormously. In this overview, we will summarize these developments and provide an easy point-by-point guide for assessing different microglial phenotypes. Microglia have previously been studied in mammals including humans and rodents, but also in amphibians, reptiles, birds and annelids . In all animals, microglia present themselves as CNS-resident cells with a ramified shape. In some species, microglia have more protrusions or a higher volume. Moreover, differences in microglia density are observed. Further examination of the cell transcriptome revealed a common microglial core gene expression pattern that is conserved across evolution. This core signature includes genes involved in microglia development (e.g. Csf1r, Spi1 , Irf8 ), lysosomal markers (e.g. 
Ctsb ) or genes that have previously been identified as “microglia-specific” since they are barely expressed in other (infiltrating) immune cells (e.g. Tmem119 , CD81 , Sall1 , Hexb and P2ry12 ). In addition, some genes such as Msr1 are predominantly expressed in primates. In contrast, the expression in rodents including mice and rats is extremely low. Msr1 encodes for macrophage scavenger receptor 1 that is involved in amyloid beta (Aβ) processing. The gene is thought to be a risk locus for Alzheimer’s disease, as shown in genome-wide association studies. Microglia in the brains of rodents which are housed under specific-pathogen-free laboratory conditions show a stable homeostatic phenotype. In contrast, some microglial activation markers may be observed in the normal white matter of patients, devoid of any known neurological disease . As an example, the major molecules involved in oxygen radical production, such as those of the NOX2 complex are already expressed in the normal brain and highly up-regulated in inflammatory or neurodegenerative diseases , while in rodents their expression is restricted to a very small subset of microglia even under inflammatory conditions . Another major difference is the profound iron loading of microglia in the aging human brain , which is associated with microglia senescence and is virtually absent in the rodent brain. These examples highlight the limitations of animal models when examining microglia in the context of human diseases because they may not completely reflect the human disease pathogenesis. Physicians have a great interest in visualizing the activation states of microglia non-invasively in the human brain. In particular, positron emission tomography using microglia-specific radiotracers could help to clarify uncertain diagnoses or monitor the course of a disease such as multiple sclerosis. Many chemical compounds targeting e.g. TSPO , P2X7 , CB2 receptor , COX-2 or CSF1R have been developed in the past decades. 
The radiotracers all face similar problems. One major issue is the binding specificity of the compounds in vivo. On the one hand, polymorphisms in the target gene may determine the binding affinity . On the other hand, unspecific binding potentially reduces the diagnostic power of the analysis. Moreover, the radiotracers’ targets are not necessarily specific for microglia cells. For instance, it has been shown that TSPO is also expressed by other cell types such as astrocytes or glial progenitor cells and thus cannot be seen as a microglia-specific radiotracer target. Moreover, a recent study could show that the findings from animal models do not always translate to the clinical context in humans . Despite the fact that existing radiotracers need further improvement, the approach holds great promise for patients with (suspected) CNS pathologies. Even more, they allow to monitor disease activity longitudinally in chronic diseases, such as multiple sclerosis, and could thus be used as a paraclinical tool to monitor the effect of neuroprotective treatments . The haematoxylin and eosin (H&E) stain is the most widely applied technique for routine histopathological examination of human tissue samples (Figs. , ). Although the cytoplasm of microglia is hardly visible in H&E-stained specimens, microglial nuclei can be identified by their characteristic shape . They are typically dark and relatively small in size. The partially occurring elongated nuclear shape had been termed “bean-shaped”, “cigar-shaped” or “comma-shaped” in the literature . Moreover, the contour of microglial nuclei partially appears irregular . In the beginning of the twentieth century, different silver impregnation methods enabled Santiago Ramón y Cajal and his student Pío del Río Hortega to describe and characterize microglia. Since the late 1950s, electron microscopy (EM) was used to determine the ultrastructure of microglia . 
In EM, microglia appear rather small in size and may have bent, "bean-shaped" nuclei . The cytoplasm typically appears sparse and electron-dense and frequently contains polymorphic electron-dense inclusions. Additional labelling, e.g. by gold-coupled antibodies, can help to identify the cells. Later, it was shown that lectins such as isolectin B4, which can be obtained from seeds of the African plant Griffonia simplicifolia or the tomato plant Lycopersicon esculentum , can be used for staining microglia . Nowadays, silver impregnations and lectins are hardly used anymore. They have almost completely been replaced by a more specific method, namely immunohistochemistry (Supplementary Fig. 1). Immunohistochemical reactions against the ionized calcium-binding adapter molecule 1 (Iba1) are routinely used to get a first impression of the microglia in a tissue sample. The cytoplasmic immunoreactivity nicely visualizes the cell shape and all processes. As described below, the morphology of the cell can provide insights into the activation status. Immunohistochemical reactions against human leukocyte antigen DR (HLA-DR) are commonly used in addition. Immunohistochemistry against macrosialin (CD68) reveals the degree of lysosomal activity. Since virtually all CNS pathologies involve microglia activation, the microglial phenotype alone can reveal whether a tissue sample must be considered pathological or not. Specifically, a tissue specimen with an entirely normal, homeostatic microglial phenotype excludes pathological CNS processes with almost complete certainty. Conversely, microglial alterations indicate CNS pathologies within the examined sample or even in the in situ CNS neighborhood. While this is obvious for young rodents housed under specific-pathogen-free conditions, the interpretation of microglial activation in the human brain can be more challenging. Non-neurological comorbidities and ongoing systemic therapies need to be taken into consideration.
Using light microscopy, chromogenic DAB-based immunohistochemistry against Iba1 is a widely established method to study the morphological characteristics of microglial cells in both murine and human tissue specimens. We recommend the following standardized approach to assess microglial changes in the CNS during homeostasis and perturbation (Fig. , Supplementary Fig. 2).

Step 1: Identification of myeloid cells within the CNS

Iba1 is a reliable cytoplasmic microglial marker with a strong signal, labelling both cell bodies and processes (Fig. , Step 1, Supplementary Fig. 2a). Nevertheless, there is a major pitfall in using this marker, since Iba1 does not exclusively label microglia. It also marks other cell types such as perivascular or meningeal macrophages and even infiltrating myeloid cells such as monocytes. Due to their location within the meninges, meningeal macrophages can easily be identified. Distinguishing perivascular macrophages from microglia can be more challenging. Perivascular macrophages typically present with an elongated shape and, unlike microglia, without many processes. By nature, identifying vessels helps to find perivascular macrophages. With haematoxylin as counterstaining, vessels may show a lumen. A roundish shape may be visible only if the vessel is cut transversally. Longitudinally cut vessels show a series of elongated nuclei and potentially an elongated lumen. In cortical biopsies, the course of the vessels is typically perpendicular to the cortical tissue surface, which can help to identify vessels and subsequently perivascular macrophages. After excluding meningeal and perivascular macrophages, the remaining parenchymal Iba1-labelled cells with processes are microglia. Notably, infiltrating monocytes would also be Iba1-positive. Although the cell shape may help to distinguish them from microglia to some extent, infiltrated monocytes may become ramified in the brain extracellular space .
Step 2: Cell density

Microgliosis is defined as an elevated number of microglia cells (Fig. , Step 2). For better comparison, only cells with a visible soma/nucleus should be taken into account. Fine processes of microglial cells whose somata are not visible and are most likely located in the consecutive tissue section should not be counted. A microgliosis is always a sign of ongoing pathology and is caused by microglial cell proliferation and/or myeloid cell infiltration. Unless a developing brain with a physiological microgliosis is examined, the observation of higher numbers of microglia cells alone proves that the tissue is not completely homeostatic or healthy. The microgliosis can either be a sign of an active microglia-driven process or alternatively occur in response to any neighboring pathological events. Therefore, any microgliosis in a patient specimen requires further examination. Of note, the staining procedure, including the selection of the antibody clone, the thickness of the section and the incubation times, may affect the number of cells labelled. In mice, the exact hygiene status of an animal facility highly influences the number of cells observed. Consequently, the comparison with age-matched controls is essential.

Step 3: Cell shape

As brain-resident macrophages, microglia usually have a ramified, spider-like shape. In the homeostatic CNS, microglia mostly present with a small soma and multiple processes (Fig. , Step 3). The processes are thin, also at the junction with the cell body. There are numerous ramifications. The thickness of the arms hardly changes along their course, making them look like a line. Upon activation, microglia rapidly change their morphology within a few minutes. Typically, the microglial somata appear bigger and the arms thicker. Occasionally, the processes taper to a point. In this case, they have a bigger diameter at the soma and a smaller diameter in the periphery.
This phenotype has previously been described as “thorny” or “spiky”. The processes can be shorter with less ramifications (Supplementary Fig. 2b). Thus, the whole cell may occupy a smaller area. Altogether, activated microglia cells look less delicate and more condensed. Light microscopy is very well suited for the assessment of the microglial morphology. Quantifiable morphometrical data (e.g. dendrite length, number of segments, number of branch points, cell volume) can be obtained by fluorescent immunohistochemistry, confocal microscopy and 3D reconstruction . During homeostasis, microglia are considered to be long-lived in both mouse and man. They show only a small self-renewal rate through proliferation in the adult at a rate of 0.5% . There is no infiltration of circulating immune cells into the healthy CNS parenchyma. In contrast, myeloid cells cross the blood–brain barrier (BBB) and enter the CNS together with lymphocytes under neuroinflammatory conditions (e.g. in patients with active multiple sclerosis). Like parenchymal microglia, infiltrating parenchymal myeloid cells such as monocytes are also Iba1-positive. After infiltration into the brain parenchyma, bone marrow-derived macrophages initially retain their round shape, which allows them to be identified at this stage. Over time, the cells acquire a phenotype highly resembling brain-resident microglia. This phenotype includes transcriptional and morphological features. Of note, after ablation of microglia in mice, the microglial compartment is reconstituted by proliferation of CNS-resident cells and independent from bone-marrow derived precursors . Iba1-positive cells with a round shape may point towards an infiltration of hematopoietic cells. Immunohistochemistry with markers for leucocytes, for example CD3 for T cells and CD20/B220 for B cells, might be useful to investigate this in more detail. 
However, fully activated microglia may also appear roundish and foamy and can no longer be distinguished from infiltrating Iba1+ monocytes.

Step 4: Distribution pattern

The distribution pattern of microglia should be carefully examined considering several aspects. First, are the microglial features observed homogeneous across the whole tissue section? Occasionally, the microglial phenotype differs regionally within the same section. For instance, white matter microglia may present with more activated features than grey matter microglia (Fig. , step 4). Perivascular microglia accumulation might point towards a vascular pathology. Second, do the individual microglia respect each other's territory? In a physiological brain, the distance between a microglial cell and the surrounding microglia is comparatively constant; spots with two or more microglia cells accumulating are fairly rare (Fig. , step 4, upper left). Vice versa, finding this pattern commonly points towards pathological processes (Supplementary Fig. 2e). Iba1-positive structures with many accumulating microglia cells and indistinguishable cell borders are called microglia nodules. They are commonly found in chronic inflammatory conditions, including viral infections or putative autoimmune diseases such as multiple sclerosis . In particular, microglia nodules have been described in the context of HIV and COVID-19 encephalopathy .

Step 5: Distinct microglial phenotypes

Next, the researcher should look for rare but quite distinct microglial phenotypes. Among these are the microglial nodules discussed earlier (Fig. , step 5). An increased Iba1+ cell density (see "Step : cell density") can primarily result from higher numbers of invading blood-borne cells or an increase in self-renewal by proliferation. In the latter instance, it may be possible to detect microglia cells that are in the process of division . The most reliable evidence is certainly the presence of mitotic figures within microglia cells.
However, due to the short window within the cell cycle, mitotic figures can only rarely be detected. For this reason, it can be useful to combine Iba1 immunohistochemistry with the proliferation marker Ki-67 . Even beyond the slightly longer time window of Ki-67 positivity, there may be signs of previous cell divisions. Thus, two closely located microglia cells with a connecting cytoplasmic bridge strongly suggest that the cells arose from a single cell that has been dividing. Small bulges on thin microglia processes, named knot-like structures, were demonstrated in brain samples from patients with hereditary diffuse leukoencephalopathy with spheroids (HDLS) . The disease is characterized by various neurological symptoms including dementia. It is caused by different mutations in the CSF1R gene. Since the gene is predominantly expressed in microglia in the brain, the disease is considered a "primary microgliopathy" . Foam cells are transformed macrophages whose cytoplasm appears foamy and bubbly because of previously phagocytosed material, primarily lipids . In the periphery, the formation of foam cells has been well studied in the context of atherosclerosis . Foam cells can also be found within the CNS (Supplementary Fig. 2h). They are typically found in multiple sclerosis lesions . In this case, the foam cells may also contain myelin . The degradation of the myelin components follows a predictable temporal sequence. Thus, the chemical profile of the myelin degradation products can be used as a precise marker for the time-dependent evolution of a lesion . As a sign of active debris clearance, foam cells are occasionally observed in close proximity to brain tumors or CNS abscesses. Under certain neurodegenerative conditions in the human brain and in inactive lesions of multiple sclerosis, the global number (density) of microglia is reduced. This appears to be a consequence of microglia senescence during active disease .
Microglia senescence is characterized by clumping and loss of cell processes finally resulting in cell death by apoptosis. Senescent microglia can be visualized by immunohistochemical staining for ferritin, since it is associated with microglia iron load . Senescent microglia may be a result of oxidative injury and may be one of the reasons for microglia dysfunction in the cortex of patients with Alzheimer’s disease. Step 6: Interaction with other cell types and structures Finally, when assessing microglia using light microscopy, the examiner should look for any signs of excessive interactions and/or physical contacts with other cell types or structures. In the case of close physical contact with neurons, the phagocytosis of neurons by microglia, so-called neuronophagy, could be observed (Fig. , step 6). The interaction with oligodendrocytes is of special interest, in particular regarding inflammatory demyelinating disorders such as MS. The Luxol-Fast-Blue-Periodic-Acid-Schiff (LFB-PAS) stain can help to find demyelinating plaques. Moreover, small cells containing vibrant blue-stained myelin fragments demonstrate myeloid cells that are actively phagocyting white matter components. Iba1-positive cells are also found after ischemic events: the histopathological findings in a subacute ischemic brain infarct, phase II, includes the infiltration of macrophages . In neurodegenerative disorders characterized by the formation of Aβ deposits, plaque-associated microglia can be present . Both microglia and macrophages are known for colonizing CNS neoplasms, as for example gliomas. The so-called tumor-associated macrophages (TAMs) have been carefully characterized and their therapeutic potential is currently being explored . Iba1 is a reliable cytoplasmic microglial marker with a strong signal, labelling both cell bodies and processes (Fig. , Step 1, Supplementary Fig. 2a). 
Nevertheless, there is a major pitfall in using this marker, since Iba1 does not exclusively label microglia cells. It also marks other cell types such as perivascular or meningeal macrophages or even infiltrating myeloid cells such as monocytes. Due to their location within the meninges, meningeal macrophages can easily be identified. Distinguishing perivascular macrophages from microglia can be more challenging. Perivascular macrophages typically present with an elongated shape and unlike microglia without many processes. By nature, identifying vessels helps to find perivascular macrophages. With haematoxylin as counterstaining, vessels may show a lumen. Only if the vessel is cut transversally, a roundish shape might be visible. Longitudinally cut vessels show series of elongated nuclei and potentially an elongated lumen. In cortical biopsies, the course of the vessels is typically perpendicular to the cortical tissue surface, which can help to identify vessels and subsequently perivascular macrophages. After excluding meningeal and perivascular macrophages, the remaining parenchymal Iba1-labelled cells with processes are microglia. Notably, infiltrating monocytes would also be Iba1-positive. Although the cell shape may help to distinguish them from microglia to some extent, infiltrated monocytes may become ramified in the brain extracellular space . Microgliosis is defined as an elevated number of microglia cells (Fig. , Step 2). For better comparison, only cells with a visible soma/nucleus should be taken into account. Fine processes of microglial cells whose somata are not visible and most likely located in the consecutive tissue section should not be counted. A microgliosis is always a sign of ongoing pathology that is caused by microglial cell proliferation and/or myeloid cell infiltration. 
Unless a developing brain with a physiological microgliosis is examined, the observation of higher numbers of microglia cells alone proves that the tissue is not completely homeostatic or healthy. The microgliosis can either be a sign for an active microglia-driven process or alternatively occur in response to any neighboring pathological events. Therefore, any microgliosis in a patient specimen requires further examination. Of note, the staining procedure including the selection of antibody clone, thickness of the section and the incubation times may affect the number of cells labelled. In mice, the exact hygiene status of an animal facility highly influences the number of cells observed. Consequently, the comparison with age-matched controls is essential. As brain-resident macrophages, microglia usually have a ramified, spider-like shape. In the homeostatic CNS, microglia mostly present with a small soma and multiple processes (Fig. , Step 3). The processes are thin, also at the junction with the cell body. There are numerous ramifications. The thickness of the arms hardly changes in the course, making them look like a line. Upon activation, microglia rapidly change their morphology within a few minutes. Typically, the microglial somata appear bigger and the arms are thicker. Occasionally, the processes taper to a point. In this case, they have a bigger diameter at the soma and a smaller diameter in the periphery. This phenotype has previously been described as “thorny” or “spiky”. The processes can be shorter with less ramifications (Supplementary Fig. 2b). Thus, the whole cell may occupy a smaller area. Altogether, activated microglia cells look less delicate and more condensed. Light microscopy is very well suited for the assessment of the microglial morphology. Quantifiable morphometrical data (e.g. 
dendrite length, number of segments, number of branch points, cell volume) can be obtained by fluorescent immunohistochemistry, confocal microscopy and 3D reconstruction. During homeostasis, microglia are considered to be long-lived in both mouse and man. They show only a low self-renewal rate through proliferation in the adult, at about 0.5%. There is no infiltration of circulating immune cells into the healthy CNS parenchyma. In contrast, myeloid cells cross the blood–brain barrier (BBB) and enter the CNS together with lymphocytes under neuroinflammatory conditions (e.g. in patients with active multiple sclerosis). Like parenchymal microglia, infiltrating myeloid cells such as monocytes are also Iba1-positive. After infiltration into the brain parenchyma, bone marrow-derived macrophages initially retain their round shape, which allows them to be identified at this stage. Over time, the cells acquire a phenotype highly resembling brain-resident microglia, including transcriptional and morphological features. Of note, after ablation of microglia in mice, the microglial compartment is reconstituted by proliferation of CNS-resident cells, independent of bone marrow-derived precursors. Iba1-positive cells with a round shape may point towards an infiltration of hematopoietic cells. Immunohistochemistry with markers for leucocytes, for example CD3 for T cells and CD20/B220 for B cells, may be useful to investigate this in more detail. However, fully activated microglia may also appear roundish and foamy and can no longer be distinguished from infiltrating Iba1+ monocytes. The distribution pattern of microglia should be carefully examined considering several aspects. First, are the microglial features observed homogeneous in the whole tissue section? Occasionally, the microglial phenotype differs regionally within the same section.
For instance, white matter microglia may present with more activated features than grey matter microglia (Fig. , step 4). Perivascular microglia accumulation might point towards a vascular pathology. Second, do the individual microglia respect each other’s territory? In a physiological brain, the distance between a microglial cell and the surrounding microglia is comparatively constant; spots with two or more microglia cells accumulating are fairly rare (Fig. , step 4, upper left). Vice versa, finding this pattern commonly points towards pathological processes (Supplementary Fig. 2e). Iba1-positive structures with many accumulating microglia cells and indistinguishable cell borders are called microglia nodules. They are commonly found in chronic inflammatory conditions, including viral infections or putative autoimmune diseases such as multiple sclerosis. In particular, microglia nodules have been described in the context of HIV and COVID-19 encephalopathy. Next, the researcher should look for rare but quite distinct microglial phenotypes, among them the microglial nodules discussed earlier (Fig. , step 5). An increased Iba1+ cell density (see “Step : cell density”) can primarily result from higher numbers of invading blood-borne cells or from an increase in self-renewal by proliferation. In the latter instance, it may be possible to detect microglia cells that are in the process of division. The most reliable evidence is certainly the presence of mitotic figures within microglia cells. However, due to the short window within the cell cycle, mitotic figures can only rarely be detected. For this reason, it can be useful to combine Iba1 immunohistochemistry with the proliferation marker Ki-67. Even beyond the slightly longer time window of Ki-67 positivity, there may be signs of previous cell divisions.
Thus, two closely located microglia cells with a connecting cytoplasmic bridge strongly suggest that they arose from a single dividing cell. Small bulges on thin microglia processes, named knot-like structures, have been demonstrated in brain samples from patients with hereditary diffuse leukoencephalopathy with spheroids (HDLS). The disease is characterized by various neurological symptoms including dementia. It is caused by different mutations in the CSF1R gene. Since this gene is predominantly expressed by microglia in the brain, the disease is considered a “primary microgliopathy”. Foam cells are transformed macrophages whose cytoplasm appears foamy and bubbly because of previously phagocytosed material, primarily lipids. In the periphery, the formation of foam cells has been studied extensively in the context of atherosclerosis. Foam cells can also be found within the CNS (Supplementary Fig. 2h). They are typically found in multiple sclerosis lesions. In this case, the foam cells may also contain myelin. The degradation of the myelin components follows a predictable temporal sequence. Thus, the chemical profile of the myelin degradation products can be used as a precise marker for the time-dependent evolution of a lesion. As a sign of active debris clearance, foam cells are occasionally observed in close proximity to brain tumors or CNS abscesses. Under certain neurodegenerative conditions in the human brain, and in inactive lesions of multiple sclerosis, the global number (density) of microglia is reduced. This appears to be a consequence of microglia senescence during active disease. Microglia senescence is characterized by clumping and loss of cell processes, finally resulting in cell death by apoptosis. Senescent microglia can be visualized by immunohistochemical staining for ferritin, since it is associated with microglial iron load.
Senescent microglia may be a result of oxidative injury and may be one of the reasons for microglia dysfunction in the cortex of patients with Alzheimer’s disease. Finally, when assessing microglia using light microscopy, the examiner should look for any signs of excessive interactions and/or physical contacts with other cell types or structures. In the case of close physical contact with neurons, the phagocytosis of neurons by microglia, so-called neuronophagy, may be observed (Fig. , step 6). The interaction with oligodendrocytes is of special interest, in particular regarding inflammatory demyelinating disorders such as MS. The Luxol-Fast-Blue-Periodic-Acid-Schiff (LFB-PAS) stain can help to find demyelinating plaques. Moreover, small cells containing vibrant blue-stained myelin fragments represent myeloid cells that are actively phagocytosing white matter components. Iba1-positive cells are also found after ischemic events: the histopathological findings in a subacute ischemic brain infarct, phase II, include the infiltration of macrophages. In neurodegenerative disorders characterized by the formation of Aβ deposits, plaque-associated microglia can be present. Both microglia and macrophages are known for colonizing CNS neoplasms, such as gliomas. The so-called tumor-associated macrophages (TAMs) have been carefully characterized and their therapeutic potential is currently being explored. After purification using a 37% Percoll gradient, microglia can be identified by flow cytometry with leukocyte common antigen (CD45) and integrin alpha M (ITGAM, CD11b) as commonly used markers. These markers can also be used to target CNS-associated macrophages (CAMs, e.g. perivascular macrophages) if the tissue has been digested enzymatically before. Cells positive for lineage markers (T-cell surface glycoprotein CD3, B-lymphocyte antigen CD19, B-lymphocyte antigen CD20) should ideally be excluded.
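The CD45/CD11b gating logic just described can be illustrated with a toy classifier. The intensity thresholds below are entirely hypothetical; in practice, gates are set per experiment on the measured intensity distributions, and microglia are conventionally taken as the CD11b-positive, CD45-low fraction:

```python
def classify_event(cd11b: float, cd45: float, lineage_pos: bool,
                   cd11b_gate: float = 1000.0, cd45_hi: float = 5000.0) -> str:
    """Toy gating of a single flow-cytometry event (arbitrary intensity units).

    Lineage-positive events (CD3/CD19/CD20) are excluded first; among CD11b+
    events, CD45-low is conventionally called microglia, while CD45-high points
    to other myeloid cells (e.g. infiltrating monocytes or CAMs).
    """
    if lineage_pos:
        return "excluded (lymphocyte)"
    if cd11b < cd11b_gate:
        return "non-myeloid"
    return "microglia (CD45-low)" if cd45 < cd45_hi else "myeloid, CD45-high"

print(classify_event(cd11b=4000, cd45=1200, lineage_pos=False))  # microglia (CD45-low)
```

This is only a sketch of the gating hierarchy, not a substitute for proper compensation, doublet exclusion, and control-based gate placement.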
With its spatial resolution and the possibility of assessing cell morphology, immunohistochemistry is particularly well suited for the analysis of microglia. As discussed earlier, the immunohistochemical reaction against Iba1 labels brain-resident microglia and CAMs, as well as blood-borne myeloid cells that invade the brain parenchyma under certain conditions. Transmembrane protein 119 (TMEM119) and P2Y purinoceptor 12 (P2RY12) were reported as novel markers that help to discriminate between microglia, CAMs and infiltrating myeloid cells. Both markers were shown to be expressed on microglia, but not on CAMs and infiltrating monocytes. Therefore, a cell within the brain parenchyma with positivity for TMEM119 or P2RY12 can be unambiguously identified as a microglia cell (Fig. ). P2RY12 is an excellent marker for the homeostatic phenotype of microglia and is rapidly lost in many pathologies. Although TMEM119 is also downregulated upon activation, this process is incomplete and often takes more time. Both markers have been used in a detailed phenotypic characterization of microglia in multiple sclerosis lesions. Thus, TMEM119 is a good marker for microglia in initial and early lesions, but becomes less useful with lesion maturation (Fig. ). Further microglia characterization can then be achieved by the use of markers that define specific functional states. As examples, the expression of MHC class I or II antigens or CD86 is related to antigen presentation, CD68 to phagocytosis, iNOS and molecules of the NOX-2 complex to the production of nitric oxide and oxygen intermediates, and Fc and complement receptors to the uptake of opsonized tissue elements. Ferritin indicates iron loading. Novel CAM-specific markers are supplementing the technical repertoire. The gene Mrc1, encoding the mannose receptor (CD206), for instance, is expressed on CAMs and barely on microglia.
Perivascular macrophages, for instance, also express the scavenger receptor cysteine-rich type 1 protein M130 (CD163), a receptor for hemoglobin–haptoglobin complexes. While homeostatic microglia show no expression of CD163, it has been shown that parenchymal myeloid cells can upregulate CD163 under certain conditions. Commonly used markers are summarized in Table . During the last decade, the microglial transcriptome has been studied extensively in mice and humans under both homeostatic and disease conditions. Despite minor differences across studies, there is a unique microglial core gene signature that partially overlaps with other tissue macrophages outside the CNS. Among those, genes encoding transcription factors (e.g. Sall1 ), (cell surface) receptors ( Cx3cr1 , Gpr34 , Fcrls , P2ry12 , Csf1r ), and enzymes ( Hexb ) have been described (Fig. ). Of note, some genes are downregulated upon activation, for example P2RY12 as described above. Microglia cells can thus be identified by their distinct transcriptomic signature. Histone modifications (e.g. H3K9 and H3K27 acetylation or H3K4 (tri)methylation) can be identified by chromatin immunoprecipitation DNA-sequencing (ChIP-Seq). Such histone modifications may lead to enhanced chromatin accessibility, ultimately influencing the cell’s transcriptome. An assay of transposase-accessible chromatin with subsequent sequencing (ATAC-seq) is commonly used to investigate genome-wide chromatin accessibility. While conventional imaging methods suggest that microglia form a comparatively homogeneous population, recent studies demonstrated that microglia are heterogeneous. Using single-cell RNA-sequencing technology, microglia can be divided into distinct (disease-specific) states or subclusters. The gene expression pattern of the disease-associated clusters is of particular interest.
Since novel therapeutic targets may only be expressed on cells of a certain cluster, they can easily be missed when analyzing all microglia as a whole. Consequently, analysis at the single-cell level is a promising approach in microglia research. So far, many CNS pathologies and their respective animal models have been analyzed using single-cell RNA sequencing approaches, e.g. Alzheimer’s disease, multiple sclerosis or gliomas. Intracerebral macrophages other than microglia, i.e. perivascular macrophages, meningeal macrophages and choroid plexus macrophages, are commonly summarized as CAMs or border-associated macrophages (BAMs). Single-cell technologies have tremendously helped to better understand CAMs/BAMs in health and disease. Single-cell RNA-sequencing was able to define myeloid cell clusters in isocitrate dehydrogenase 1 (IDH1)-mutant astrocytomas that were not present in IDH1 wild-type gliomas, which may have both diagnostic and therapeutic consequences. New high-throughput technologies continuously complement the existing spectrum of experimental methods, in particular in the fields of proteomics, metabolomics and lipidomics (Fig. ). Cytometry by time-of-flight (CyTOF) allows the simultaneous investigation of more than forty markers in single-cell suspensions, strongly expanding the possible number of channels compared to conventional flow cytometry. CyTOF-based techniques, e.g. imaging mass cytometry (IMC) or multiplexed ion beam imaging (MIBI), add spatial information to single-cell data at the protein level. Using IMC together with clinical data such as long-term survival, new subgroups of disease entities could be identified. Moreover, this technique has recently been used to characterize the immune landscape in the brainstem of patients who died of COVID-19. Furthermore, novel techniques combining different analyses are emerging, e.g.
cellular indexing of transcriptomes and epitopes by sequencing (CITE-seq) or RNA expression and protein sequencing (REAP-seq), linking transcriptomics and proteomics. Different subsets of tumor-associated macrophages in glioblastomas could be identified by applying CITE-seq. In-situ sequencing (ISS) or in-situ capture approaches (e.g. the Cartana or Visium platform) aim to link transcriptomic information to spatial distribution. Another RNA sequencing technique with spatial resolution is Slide-seq, which was used to characterize traumatic brain injury in mice. The amount of data obtained through the use of novel high-throughput methods is huge, and data analysis is quite complex. However, to what extent findings from animal models can be transferred to the situation in humans is a recurring question. In this regard, three aspects are especially important to note. First, in spite of a common core gene expression signature that is preserved across evolution, microglia from animals and humans show distinct differences in gene expression and, as a consequence, presumably in cell function. Secondly, the genetic basis is more diverse in human individuals compared to inbred mice. It is therefore conceivable that some individuals show a certain predisposition for enhanced microglia activation. This hypothesis is supported by the clinical course of microgliopathies. Microgliopathies form a group of hereditary diseases in which a gene mutation in myeloid cells leads primarily to neurodegenerative symptoms. Studying those rare entities might lead to a better understanding of microglia-driven brain pathologies in general. Thirdly, it has been shown that microglia both react quickly and act in a long-term manner, likely due to an epigenetic reprogramming of the cells. Thus, an observed microglial phenotype can be caused by any prior alteration of brain homeostasis.
This means that comorbidities, medications, former and current infections, and any other environmental factor throughout life have to be taken into account when analyzing tissue specimens. While those factors should be well controlled in animal models, choosing the right control group for human samples can be challenging. All novel methods come with their very own challenges and hurdles. These can be of a technical (e.g. low RNA sequencing depth) or conceptual nature (e.g. a high gene expression level does not necessarily translate into a higher protein level). Another issue is tissue availability. Many techniques require specific tissue preservation protocols other than formaldehyde-fixed paraffin-embedded (FFPE) tissue. Novel technologies such as CyTOF-based imaging mass cytometry also use FFPE samples, granting access to biobanks with archival human material. This technique has recently been used to examine the changes within the CNS after SARS-CoV-2 infection. Using FFPE tissue, this method may connect cutting-edge technology with daily pathological routine diagnostics. Despite all technical progress, research projects using sophisticated novel technologies need a clear experimental design and research strategy. Respective investigations have to be performed with a clearly defined research plan. The human material should be well characterized and adequately stored. Proper controls, including suitable disease controls, must be included. Therefore, the collection of an optimal patient cohort remains a major challenge when studying CNS pathologies. Since microglia rapidly react to even subtle alterations in CNS homeostasis, they can be seen as sensors for neurological dysfunctions and/or disorders. Many of these are far from being understood completely. Investigating microglia in the disease context might lead to a better understanding of the respective disease itself.
Given their ability to modulate CNS pathologies, studying microglia may also create new therapeutic options. Vice versa, the absence of microglia alterations strongly indicates the absence of any neuropathology. In this review, we have summarized the technical possibilities for analyzing microglia phenotypes in tissue specimens obtained from humans and animals. We have proposed a simple step-by-step protocol for analyzing microglia phenotypes in histology. It should enable researchers of any specialty to evaluate the microglial phenotype in any given tissue sample. Furthermore, it should provide ideas for more detailed, subsequent experiments. So far, virtually all available techniques for the analysis of microglia cells rely on tissue: either as a whole (e.g. for histological analyses) or processed (e.g. as a single-cell suspension for single-cell RNA sequencing). This is not a major constraint for the analysis of animal models or post-mortem autopsy cases. It is, however, a limitation in patients with a suspected brain pathology. With very good reason, CNS biopsies are not taken carelessly or casually. However, not having a biopsy specimen implies knowing nothing about the microglial phenotype that could inform about the underlying (or even a just developing) pathology. Therefore, the development of non-invasive techniques for the analysis of microglial phenotypes might close this gap. As such, PET imaging with microglia-specific radiotracers is a tempting future prospect. Detecting disease-specific microglia metabolites in liquid biopsies might be another promising approach. Taken together, the technical possibilities for studying microglia have evolved rapidly in recent years. New methods have provided unprecedented insights into microglia biology in general as well as into the etiology and course of many CNS disorders. Despite all the progress made, the potential of microglia has certainly not yet been fully exploited.
Many more exciting years in microglia research are yet to come. Below is the link to the electronic supplementary material. Supplementary Figure 1: Exemplary step-by-step protocol for performing an immunohistochemistry on human or murine tissue sections as commonly performed at the Institute for Neuropathology at the University Medical Center Freiburg. The protocol for the preparation of cryopreserved sections (upper left) or FFPE sections (upper right) is shown. Below, the steps for fluorescent (lower left) and chromogenic (lower right) immunohistochemistries (e.g. for Iba1, TMEM119 or P2RY12) are explained. Supplementary Figure 2: Different microglia/macrophage morphologies are depicted in exemplary patient samples. The immunohistochemistry for Iba1 (brown) is exemplarily shown in different CNS pathologies. Counterstaining with haematoxylin (blue). Scale bar: 100 µm. a: CNS myeloid cells first need to be identified based on their anatomical location: a haematopoietic Iba1-positive cell can be observed within a blood vessel (asterisk). Within the meninges, meningeal macrophages are labelled by Iba1 (blue arrows). Perivascular macrophages present with an elongated shape and fewer ramifications compared to microglia (green arrows). The density of parenchymal microglia (black arrows) appears normal. The cells are ramified and do not show a spiky phenotype. The distribution pattern appears regular, with the cells respecting each other’s territory. No distinct microglial phenotypes are observed. Moreover, there is no excessive interaction with other cell types. In sum, the microglial phenotype appears homeostatic. b: The cell density in sample b is comparable to sample a. Microglia with ramifications can be found (arrows). In some areas, small parenchymal Iba1-positive cells with fewer protrusions are visible, resembling an activated phenotype. Nevertheless, the cells do respect each other’s territory.
c: Iba1-positive cells with a characteristic “spiky” morphology are seen (arrows). d: A similar phenotype with “thorny” cells can be observed in d. Some cells appear to have fewer protrusions (asterisks). e: Spatial differences can be identified in e. The cells in closer vicinity to the lesion (blue arrow) look more condensed compared to more distant microglia (green arrow) showing fine protrusions. The cells even closer to the lesion no longer have any processes. They show overlapping territories. f: The macrophages in the glioblastoma specimen in f are comparatively large (arrows). g: Some ramified cells can be found in sample g (arrow). A large number of Iba1-positive cells appears small and round, indicating haematopoietic cells. h: Iba1-positive haematopoietic cells are also found within blood vessels in h (asterisks). A perivascular macrophage can be identified (blue arrow). A few cells show a slightly ramified morphology (green arrow). Many cells have undergone foam cell transformation (black arrows), indicating active debris clearance.
Left ventricular unloading via percutaneous assist device during extracorporeal membrane oxygenation in acute myocardial infarction and cardiac arrest

Over 800,000 people suffer an acute myocardial infarction (AMI) per year in the United States. Despite modern revascularization strategies, 10% of patients develop cardiogenic shock (CS), resulting in systemic hypoperfusion and end-organ dysfunction. Cardiac dysfunction post-AMI may result in impaired ventricular contraction, leading to decreased cardiac output, decreased coronary perfusion, and hemodynamic instability. Furthermore, myocardial injury may continue following the initial insult as the infarct extends circumferentially toward the subepicardial region, causing a greater decline in myocardial function. Over 50% of AMI-CS patients may experience a cardiac arrest (CA), either as a preceding event or as a sequela of CS itself, with in-hospital mortality rates as high as 60%. Patients with AMI-CS often require additional hemodynamic support with pharmacotherapy or temporary mechanical circulatory support (tMCS) both prior to and during revascularization. Patients may also remain in CS despite successful revascularization. Thus, the management of these patients poses a formidable challenge, demanding innovative strategies to improve survival. While inotropes and vasopressors remain a staple of therapy in CS, the past two decades have seen increasing use of tMCS, including venoarterial extracorporeal membrane oxygenation (VA-ECMO). By providing systemic oxygenated blood flow, organ perfusion is supported irrespective of intrinsic cardiac function. While VA-ECMO decreases myocardial work in CS patients, retrograde arterial flow increases left ventricular (LV) afterload, with deleterious effects on LV recovery.
Numerous unloading strategies have been deployed to attenuate the effects of VA-ECMO on LV afterload, with the goal of improving myocardial recovery and survival. One common unloading strategy in patients on VA-ECMO is concomitant use of the Impella device. The Impella catheter (Abiomed) is a microaxial pump that traverses the aortic valve and provides continuous blood flow from the left ventricle into the aorta, thus unloading the left ventricle and decreasing myocardial work. Prior studies demonstrated a potential mortality benefit with VA-ECMO plus simultaneous Impella (ECPELLA) compared to VA-ECMO alone. However, these studies focused on all causes of CS, not AMI-CS with concomitant CA. Therefore, the aims of this study were to determine whether a mortality difference was observed with VA-ECMO alone versus ECPELLA in patients with AMI-CS and CA, and to determine the frequency of complications in each cohort. This single-center retrospective cohort study was approved by the local institutional review board. The need for informed consent was waived given the retrospective nature of the study. Study design and participants A retrospective review of all patients placed on VA-ECMO or ECPELLA between 2017 and 2022 at two tertiary care centers within the same health system was performed. Inclusion criteria consisted of patients older than 18 years with AMI-CS and CA treated with VA-ECMO or ECPELLA. AMI was diagnosed according to the fourth universal definition. CA was defined as cessation of cardiac mechanical activity evidenced by an absence of signs of circulation. Patients who were cannulated for extracorporeal cardiopulmonary resuscitation (eCPR) were also included. Determination of tMCS strategy was made by a multidisciplinary Shock Team consisting of interventional cardiologists, advanced heart failure specialists, intensivists, and cardiothoracic surgeons.
The definition of shock included: (1) systolic blood pressure <90 mmHg without inotropes, or need of inotropes/vasopressors to maintain systolic blood pressure >90 mmHg; (2) pulmonary capillary wedge pressure >18 mmHg; (3) central venous pressure >15 mmHg; (4) cardiac index <2.2 L/min/m2. If no invasive hemodynamic monitoring was available, then clinical parameters of shock included: (1) persistently elevated lactate (>2 mmol/L over 2 h) despite inotrope/vasopressor support; (2) signs of systemic/pulmonary overload; (3) poor end-organ perfusion (e.g. cool/mottled extremities, oliguria/acute kidney injury, and liver function test abnormalities). The decision to wean a patient from VA-ECMO or ECPELLA was made by the Shock Team. Study variables Baseline demographic characteristics and clinical data were collected via the electronic health record. To characterize the varying levels of acuity between cohorts, both the survival after veno-arterial ECMO (SAVE) score and the vasoactive-inotropic score (VIS) were calculated for each patient. Clinical endpoints The primary outcome was 6-month mortality from initial VA-ECMO cannulation or Impella implantation. Secondary outcomes were in-hospital mortality, complication rates, and intensive care unit data. Complication rates were calculated as both binary outcomes and per patient-week (censored at time of device removal). Complications included intracranial bleeding, ischemic stroke, hypoxic brain injury, pericardial tamponade/effusion, access site ischemic event, cannula site infection, sepsis, bowel ischemia/compartment syndrome, hemolysis (lactate dehydrogenase above 1,000 U/L), arrhythmias, acute kidney injury (creatinine increase of 0.3 mg/dL from baseline), new renal replacement therapy, and mechanical ventilation duration. The Bleeding Academic Research Consortium (BARC) definition was used to define bleeding severity.
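The vasoactive-inotropic score used to grade pharmacologic support is conventionally computed as a weighted sum of drug infusion rates (the formula of Gaies et al.). A sketch of that calculation follows (doses in µg/kg/min, vasopressin in U/kg/min; whether the study used exactly this variant is not stated in the text):

```python
def vasoactive_inotropic_score(dopamine=0.0, dobutamine=0.0, epinephrine=0.0,
                               norepinephrine=0.0, milrinone=0.0, vasopressin=0.0):
    """VIS per the commonly used weighted-sum formula.

    Doses are in mcg/kg/min except vasopressin (units/kg/min); higher
    scores indicate greater pharmacologic circulatory support.
    """
    return (dopamine + dobutamine
            + 100 * epinephrine + 100 * norepinephrine
            + 10 * milrinone + 10_000 * vasopressin)

# e.g. norepinephrine 0.2 mcg/kg/min plus vasopressin 0.0004 U/kg/min
print(vasoactive_inotropic_score(norepinephrine=0.2, vasopressin=0.0004))  # 24.0
```

The heavy weights on epinephrine, norepinephrine, and vasopressin reflect their potency at far lower infusion rates than dopamine or dobutamine.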
Management of mechanical circulatory support All patients underwent peripheral cannulation with a standard VA-ECMO circuit consisting of a venous inflow cannula, a centrifugal pump with oxygenator, and an arterial outflow cannula. Patients were intubated and sedated prior to cannulation. To prevent limb ischemia, a distal perfusion cannula was often placed at the time of VA-ECMO cannulation. At our institution, distal perfusion cannulas are placed percutaneously in the superficial femoral artery, and the decision to place one at the time of cannulation is at the discretion of the attending physician. However, all VA-ECMO patients were monitored continuously with near-infrared spectroscopy and regular Doppler pulse checks. A distal perfusion cannula was subsequently placed for any evidence of loss of lower extremity perfusion if not placed at the time of cannulation. For the ECPELLA group, either the Impella 2.5® or the Impella CP® (Abiomed) was utilized. In a few cases, the Impella devices were placed at outside institutions. Devices implanted at our centers were placed via the femoral artery and then guided into the LV across the aortic valve. Impella device speeds were optimized at the discretion of the Shock Team with the goal of minimizing hemolysis, suction events, catheter thrombosis, and malpositioning. Pump position was checked with daily chest X-rays, and device repositioning was performed under echocardiographic guidance. Both VA-ECMO and Impella support were weaned as signs of shock diminished and myocardial function improved. Typically, VA-ECMO was removed before the Impella device. The decision to explant devices was made at the discretion of the Shock Team. Statistical analysis Single-variable comparisons for continuous variables were made via t-test when normally distributed and via Wilcoxon signed-rank test when non-normally distributed.
Normality was validated via the Kolmogorov-Smirnov test. Categorical variables were expressed as frequencies and comparison testing was done by Chi-squared test. When applicable, the Benjamini-Hochberg procedure was used to adjust for multiple comparisons. In the survival analyses, univariable and multivariable Cox proportional hazards models were utilized. Visualization was done using Kaplan-Meier curves. For analysis of 6-month mortality, patients were censored at 6 months. For analysis of in-hospital mortality, patients were censored at time of discharge. Hazard ratios were extracted from corresponding models. For analysis of complication rates, odds ratios derived from logistic regression modeling were utilized to compare the relative binary occurrence of complications in each group. Incidence rate ratios were used to compare the relative difference in incidence over fixed unit time between cohorts. Analyses were performed in R.
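The Benjamini-Hochberg procedure mentioned above can be implemented in a few lines. The study's analyses were performed in R, where `p.adjust(p, method = "BH")` provides the same adjustment; a language-neutral Python sketch is:

```python
def benjamini_hochberg(p_values):
    """Return Benjamini-Hochberg adjusted p-values, preserving input order."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity of the q-values
    for rank_idx in range(m - 1, -1, -1):
        i = order[rank_idx]
        q = p_values[i] * m / (rank_idx + 1)
        prev = min(prev, q)
        adjusted[i] = prev
    return adjusted

print(benjamini_hochberg([0.01, 0.04, 0.03, 0.50]))
```

Hypotheses whose adjusted p-value falls below the chosen level (commonly 0.05) are rejected while controlling the false discovery rate rather than the family-wise error rate.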
At our institution, distal perfusion cannulas are placed percutaneously in the superficial femoral artery, and the decision to place one at the time of cannulation is at the discretion of the attending physician. However, all VA-ECMO patients were monitored with continuous near-infrared spectroscopy and regular Doppler pulse checks. A distal perfusion cannula was subsequently placed for any evidence of lower extremity perfusion loss if not placed at the time of cannulation. For the ECPELLA group, either the Impella 2.5 ® (Abiomed) or Impella CP ® (Abiomed) was utilized. In a few cases, the Impella devices were placed at outside institutions. Devices implanted at our centers were placed via the femoral artery and then guided into the LV across the aortic valve. Impella device speeds were optimized at the discretion of the Shock Team with the goal of minimizing hemolysis, suction events, catheter thrombosis, and malpositioning. Pump position was checked with daily chest X-rays, and device repositioning was performed under echocardiographic guidance. Both VA-ECMO and Impella support were weaned as signs of shock diminished and myocardial function improved. Typically, VA-ECMO was removed before the Impella device. The decision to explant devices was made at the discretion of the Shock Team. Single-variable comparisons for continuous variables were made via t-test when normally distributed and via Wilcoxon signed-rank test when non-normally distributed. Normality was validated via the Kolmogorov-Smirnov test. Categorical variables were expressed as frequencies, and comparison testing was done by Chi-squared test. When applicable, the Benjamini-Hochberg procedure was used to adjust for multiple comparisons. In the survival analyses, univariable and multivariable Cox proportional hazards models were utilized. Visualization was done using Kaplan-Meier curves.
For analysis of 6-month mortality, patients were censored at 6 months. For analysis of in-hospital mortality, patients were censored at the time of discharge. Hazard ratios were extracted from the corresponding models. For analysis of complication rates, odds ratios derived from logistic regression modeling were utilized to compare the relative binary occurrence of complications in each group. Incidence rate ratios were used to compare the relative difference in incidence over fixed unit time between cohorts. Analyses were performed in R. Participants. A total of 50 of the initial 271 reviewed patients met criteria and were included in the study. All patients had AMI-CS complicated by CA. There were 34 patients supported via VA-ECMO and 16 patients by ECPELLA. Baseline demographics were similar between groups. Clinical features (CA profiles, hemodynamic/biochemical parameters, and catheterization data) are summarized. Eighty-two percent of patients experienced out-of-hospital CA (79.4% VA-ECMO; 87.5% ECPELLA; p = 0.487). ECG findings at the time of CA were variable (24% pulseless ventricular tachycardia, 24% ventricular fibrillation, 26% pulseless electrical activity, and 6% asystole), but similar between groups ( p = 0.668). Fifty-four percent of patients underwent eCPR (58.8% VA-ECMO; 43.8% ECPELLA; p = 0.133). Pre-ECMO CPR time was similar between groups (30 min VA-ECMO; 35 min ECPELLA; p = 0.983). Although 84% of patients had a STEMI prior to arrest, this differed by group (91.2% VA-ECMO; 68.8% ECPELLA; p = 0.044). Forty-six percent of patients had a left anterior descending culprit lesion (41.2% in VA-ECMO; 56.3% in ECPELLA; p = 0.414), with another 36% having a left main culprit (35.3% VA-ECMO; 37.5% ECPELLA; p = 1.000). Regarding biochemical data, no single parameter was statistically different between groups. However, the SAVE score was significantly worse in the ECPELLA group (VA-ECMO median −9, 95% CI: −13.75 to −8; ECPELLA median −13, 95% CI: −14 to −10; p = 0.032).
Despite this, the vasoactive-inotropic score (VIS) at 24 h was not significantly lower in the ECPELLA group (5.05 VA-ECMO; 4.55 ECPELLA; p = 0.433). Survival. A total of 10 patients (29.4%) in the VA-ECMO group and 2 patients (12.5%) in the ECPELLA group survived to 6 months. Of the 15 ECPELLA patients in whom the Impella and VA-ECMO were not initiated simultaneously, 5 had the Impella placed after and 10 before VA-ECMO initiation. There was no difference in 6-month survival between those receiving the Impella prior to or after VA-ECMO initiation (OR = 2.25; p = 0.571). One patient in the VA-ECMO cohort received a durable left ventricular assist device; no patients received a heart transplant. Based on a univariable Cox proportional hazards model, the calculated 6-month mortality hazard ratio for ECPELLA versus VA-ECMO was 1.64 (95% CI 0.84–3.20). For in-hospital mortality, 14 (87.5%) ECPELLA and 23 (67.6%) VA-ECMO patients died during the index hospitalization, yielding an in-hospital mortality hazard ratio of 1.58 (0.81–3.08) for ECPELLA versus VA-ECMO ( Supplemental Figure 1 ). Inclusion of the SAVE score in a bivariable hazard model yielded an adjusted hazard ratio of 1.17 (95% CI: 0.59–2.35). A multivariable Cox proportional hazards model which accounted for all analyzed variables yielded an adjusted hazard ratio of 1.02 (95% CI: 0.16–6.64). Factors significantly associated with mortality included SAVE score, presence of previous AMI, ejection fraction after recovery of spontaneous circulation, concomitant presence of systemic inflammatory response syndrome, obesity, smoking history, initial/peak lactate, presenting creatinine, and lowest pH during intervention. Of these, SAVE score demonstrated the greatest effect modification. Neither the univariable nor the adjusted multivariable hazard ratios demonstrated a significant difference between VA-ECMO and ECPELLA.
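Survival was visualized with Kaplan-Meier curves; the underlying product-limit estimator can be sketched in a few lines of Python. This is an illustration only; the study's analyses were performed in R:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times: follow-up time per subject; events: 1 = death, 0 = censored.
    Returns a list of (t, S(t)) pairs at each observed event time.
    """
    at_risk = len(times)
    surv, curve = 1.0, []
    # walk through distinct times in order, updating the product-limit estimate
    for t in sorted(set(times)):
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        if deaths:
            surv *= 1 - deaths / at_risk
            curve.append((t, surv))
        # everyone observed at t (event or censored) leaves the risk set
        at_risk -= sum(1 for ti in times if ti == t)
    return curve
```

By convention, subjects censored at a tied time are still counted in the risk set for the event at that time, which is what the code above does.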
As noted previously, the ECPELLA group had worse SAVE scores (but not VIS) than the VA-ECMO group, with ECPELLA patients tending to have SAVE scores between −10 and −15, compared to the VA-ECMO group, which clustered between −5 and −10. As expected, lower SAVE scores corresponded to increased mortality. Complication rates. A list of evaluated complications is shown in Supplemental Table 1 . Complications were further divided for purposes of pooled analysis into minor (minor bleeding, hemolysis, access site-related ischemia, cannula site infection, sepsis/SIRS, cardiac arrhythmias, and acute kidney injury) and severe (significant bleeding, strokes, abdominal compartment syndrome or bowel ischemia, hypoxic brain damage, and renal replacement therapy need). With regard to neurologic outcomes, 18 (53%) of VA-ECMO patients and 5 (31%) of ECPELLA patients had a neurologic event, with no statistically significant difference in the frequency of each of the following types of neurologic events between the cohorts: intracranial bleeds (3% VA-ECMO vs 6% ECPELLA, p = 0.542), ischemic stroke (18% VA-ECMO vs 6% ECPELLA, p = 0.406), and hypoxic brain damage (32% VA-ECMO vs 19% ECPELLA, p = 0.501). While there were no significant differences in any individual complications between the ECPELLA and VA-ECMO cohorts, differences were observed in terms of incidence rate ratios. The ECPELLA cohort showed a higher incidence rate of minor bleeding (IRR 2.36; 95% CI 1.16–4.80), arrhythmias (IRR 2.67; 95% CI 1.25–5.71), acute kidney injury (IRR 2.58; 95% CI 1.36–4.93), and need for renal replacement therapy (IRR 2.86; 95% CI 1.22–6.7). Overall, ECPELLA, relative to VA-ECMO, had an increased rate of minor complications (IRR 2.48; 95% CI 1.77–3.48) without an increased rate of severe complications (IRR 1.4; 95% CI 0.82–2.4).
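The incidence rate ratios above compare events per unit of support time between cohorts. A minimal sketch of that computation with a Wald-type 95% confidence interval (an illustrative helper, not the study code, which was written in R):

```python
import math

def incidence_rate_ratio(events_a, time_a, events_b, time_b):
    """IRR of group A vs group B with a 95% Wald confidence interval.

    events_*: event counts; time_*: person-time at risk (e.g. patient-weeks).
    """
    irr = (events_a / time_a) / (events_b / time_b)
    se = math.sqrt(1 / events_a + 1 / events_b)  # SE of log(IRR)
    lo, hi = (math.exp(math.log(irr) + z * se) for z in (-1.96, 1.96))
    return irr, lo, hi
```

A CI excluding 1 marks a rate difference, which is how the minor-complication IRRs above should be read.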
To our knowledge, this study is the first to directly compare VA-ECMO and ECPELLA in patients with AMI-CS who concomitantly sustained a CA. This marks a pivotal point, as our population may represent one of the most critically ill cohorts to be compared in the context of VA-ECMO versus ECPELLA. Schmidt et al. demonstrated that a SAVE score of −9 to −5 conferred an in-hospital survival probability of 30% or less, while a score less than or equal to −10 was associated with an in-hospital survival rate of 18% or less. The SAVE score was further validated by Chen et al., who found it to be an independent predictor of mortality in patients on VA-ECMO. The 6-month survival rates in our VA-ECMO and ECPELLA cohorts were 29.4% and 12.5%, respectively, consistent with the median SAVE scores in each cohort.
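The Schmidt et al. survival bands quoted above can be expressed as a small lookup; only the two bands cited in the text are encoded, and everything else is returned as unknown:

```python
def save_score_survival_ceiling(save_score):
    """Upper-bound in-hospital survival probability by SAVE score band.

    Bands follow the Schmidt et al. figures quoted in the text; scores
    outside the quoted bands return None (not reported here).
    """
    if save_score <= -10:
        return 0.18   # SAVE <= -10: in-hospital survival 18% or less
    if -9 <= save_score <= -5:
        return 0.30   # SAVE -9 to -5: in-hospital survival 30% or less
    return None
```

Applied to the cohort medians reported above (−9 for VA-ECMO, −13 for ECPELLA), the lookup returns ceilings of 30% and 18%, which bracket the observed 29.4% and 12.5% survival rates.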
Despite advances in cardiac clinical care, CS still confers incredibly high in-hospital mortality rates. , , Other forms of tMCS, including the TandemHeart (CardiacAssist, Pittsburgh, PA), intra-aortic balloon pump, and Impella without VA-ECMO involvement, have not demonstrated mortality benefits in refractory CS in randomized controlled trials. – Although there are no randomized trials directly comparing VA-ECMO and ECPELLA, numerous retrospective studies have shown mortality benefits favoring ECPELLA. , , , A large meta-analysis by Russo et al. demonstrated that LV unloading with an Impella device for patients on VA-ECMO decreases mortality when compared with VA-ECMO alone. Given the results of these studies, LV unloading with the ECPELLA platform seems a plausible strategy for treating refractory CS. Although our study did not show a significant difference in mortality between the VA-ECMO and ECPELLA cohorts, this may be explained by a few factors. As previously mentioned, the survival rates for both groups were very low, which may limit the ability to detect survival differences between the VA-ECMO and ECPELLA cohorts. Second, because only 50 patients met inclusion criteria, the power of our study was limited. Third, our ECPELLA population was likely sicker than our VA-ECMO population, as identified by a statistically significantly worse median SAVE score, which may have obscured a potential benefit from the Impella. Lastly, it should also be noted that this study occurred prior to use of the Impella 5.5 as an unloading strategy at our institution. The Impella 5.5 can provide more unloading relative to the Impella 2.5 and CP devices, so ECPELLA with an Impella 5.5 may be a consideration in the future. In our study, we found an increase in the incidence of minor complications without an increase in major complications between the ECPELLA and VA-ECMO cohorts.
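The binary complication comparisons in this study rest on odds ratios. For a single 2×2 table, the unadjusted odds ratio and a Wald 95% confidence interval reduce to the sketch below (the study itself used logistic regression in R; the cell labels are illustrative):

```python
import math

def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio for a 2x2 table with a 95% Wald CI.

    a/b: events/non-events in group 1; c/d: events/non-events in group 2.
    Assumes all cells are non-zero (no continuity correction applied).
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi
```

With small cohorts like those here, such unadjusted Wald intervals are wide, which is one reason none of the individual binary complication comparisons reached significance.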
Previous studies have shown significantly increased rates of hemolysis, severe bleeding, renal replacement therapy, and access site complications in ECPELLA patients compared to VA-ECMO-alone patients, which generally aligns with our observations. , The most prevalent severe complication with increased incidence in the ECPELLA group was need for renal replacement therapy, consistent with the observed higher rate of acute kidney injury with ECPELLA. This latter observation may help guide patient selection for ECPELLA and may also aid in guiding how selective clinicians are in exposing these patients to nephrotoxins. Despite the increased rate of minor complications with ECPELLA, the lack of an increased rate of major complications is a significant insight, as it suggests that ECPELLA may have an acceptable safety profile relative to VA-ECMO as long as a center can comfortably manage minor complications. We hope that future studies can further investigate ECPELLA-associated complications to explore whether the observed lack of difference in major complications is secondary to lack of power, given the relatively small sample size studied here. Our study has several limitations. First, our patient population is small, thus limiting the power of our analysis. Despite covariate adjustment, the absence of randomization limits the potential to perform a true comparison of similar cohorts. Additionally, patients with CA have a high mortality independent of intervention, decreasing the feasibility of showing a survival benefit. Furthermore, there may be inherent biases in which patients are started on ECPELLA due to Shock Team preference. Lastly, as this was a retrospective analysis, we were limited in our ability to control for sources of bias. In this limited retrospective study, there does not appear to be a mortality benefit from the addition of the Impella 2.5 or CP to VA-ECMO in a high-risk post-AMI group who sustained a concomitant CA.
The observation that the cohort receiving an adjunctive Impella was sicker may have limited the ability to show a positive result. While there was an increase in the incidence of minor complications in the ECPELLA cohort, there was no difference in major complications. Future randomized controlled trials comparing ECPELLA versus VA-ECMO are required to further elucidate whether the addition of an Impella, especially the Impella 5.5, to VA-ECMO in AMI with CS and concomitant CA provides a mortality benefit. Supplemental material (sj-pdf-1-jao-10.1177_03913988241254978) for "Left ventricular unloading via percutaneous assist device during extracorporeal membrane oxygenation in acute myocardial infarction and cardiac arrest" by Jake M Kieserman, Ivan A Kuznetsov, Joseph Park, James W Schurr, Omar Toubat, Salim Olia, Christian Bermudez, Marisa Cevasco and Joyce Wald in The International Journal of Artificial Organs.
Harnessing the power of native biocontrol agents against wilt disease of Pigeonpea incited by Fusarium udum
Beneficial microbes have the potential to combat pathogens and promote plant growth, offering valuable contributions to disease control and increased crop yields. Additionally, the success of biological control agents is often higher when they originate from the local environment, such as rhizosphere microbes and endophytes, compared to foreign microorganisms. Native microorganisms are well adapted to specific local conditions, including climate, soil characteristics, and soil microbiota. Notable examples of beneficial rhizosphere and endophytic microbes include Bacillus spp., Pseudomonas spp., and Trichoderma spp. In the rhizosphere, Trichoderma spp. act as effective biocontrol agents against soil-borne pathogens, reducing F. udum populations and mitigating pigeonpea wilt through mechanisms like mycoparasitism, lytic enzyme production, nutrient competition, and the secretion of pathogen-fighting secondary metabolites – . These interactions also impact plant biochemistry, leading to increased lignin deposition, higher phenol levels, and changes in enzyme activity in response to pathogen attacks . In both the rhizosphere and as endophytes, Bacillus spp. and Pseudomonas spp. employ various strategies to combat plant diseases, including antibiosis, lytic enzymes, resource competition, extracellular proteins, antifungal antibiotics, lipopeptides, siderophores, and hydrogen cyanide (HCN) production . Additionally, these bacteria enhance nutrient availability to plants by mobilizing essential minerals such as phosphorus, potassium, and zinc through the production of organic acids – . Furthermore, Bacillus spp. and Pseudomonas spp. utilize induced systemic resistance (ISR) as a crucial mechanism to protect plants from specific diseases , .
ISR involves altering cell wall structure and producing phytoalexin-rich glycoproteins, pathogenesis-related (PR) proteins, and hydroxyproline-rich glycoproteins. Plant growth-promoting rhizobacteria (PGPR) strains contribute by generating antioxidant enzymes such as peroxidase (POD), phenylalanine ammonia lyase (PAL), and polyphenol oxidase (PPO), which serve as triggers for ISR in plants . Peroxidase is essential for processes like lignification, suberization, and the synthesis of phenols and glycoproteins, strengthening the plant cell wall and preventing fungal invasion – . Phenylalanine ammonia lyase, the initial enzyme in the phenylpropanoid pathway, is involved in the production of phytoalexins, phenols, and lignin. Bacillus spp. and Pseudomonas spp. enhance chitinase, PAL, PPO, superoxide dismutase, and β-1,3-glucanase activity while inhibiting the production of polymethyl galacturonase by F. udum in pigeonpea . In the context of our study, we highlight the importance of utilizing native biocontrol agents, both fungal and bacterial, isolated from the rhizosphere and within plant tissues. These native bioagents offer distinct advantages, as they are well adapted to local soil and climatic conditions. The fertile alluvial soils of Bihar, with their high organic matter content, favour the growth of bioagents that can effectively manage wilt diseases.
Seed material. Pigeonpea seeds of different cultivars were obtained under the All India Coordinated Research Project (AICRP) on Pigeonpea wilt programme from the Indian Institute of Pulses Research (IIPR), Kanpur. Collection, isolation and characterization of the pathogen. Pigeonpea plants exhibiting typical wilt symptoms were collected from highly susceptible cultivars (ICP2376 and BAHAR), a moderately resistant cultivar (ICP 8862), and resistant cultivars (ICP8858 and ICP9174) at the AICRP Pigeonpea wilt disease sick plot located at Tirhut College of Agriculture, Dholi (25° 59′ 41.9″ N latitude and 85° 35′ 43.3″ E longitude). Stem segments showing vascular discoloration were collected, surface sterilized [70% alcohol (30 s), 1% sodium hypochlorite (30 s) and sterile distilled water (3 × 60 s)], inoculated onto Potato Dextrose Agar (PDA) medium, and then incubated at 25 ± 2 °C for 72 h . Colonies exhibiting growth with characteristic Fusarium morphology were selected, subcultured, and grown on PDA medium following the methods outlined by , . Cultural characteristics, such as growth rate, growth pattern, mycelial color, pigmentation, radial growth, and zonation, were recorded after an 8-day incubation period. Microconidia and macroconidia morphology were observed after 8 and 15 days of incubation, respectively. Pathogenicity test. To study the pathogenicity and identity of the isolated fungus as Fusarium, Koch's postulates were conducted on the susceptible pigeonpea cultivar ICP2376. Purified Fusarium cultures were grown in 250 mL conical flasks containing 100 g of sorghum grains, which were autoclaved at 121 °C under 15 lb pressure for 15 min. Following inoculation, the cultures were incubated for 15 days. The prepared inoculum was then mixed with sterilized sandy loam soil at a 1:4 ratio (pathogen to soil, w/w) and placed in 15 cm diameter plastic pots.
Pigeonpea seeds were subjected to surface sterilization with a sodium hypochlorite solution for 2 min, followed by three rinses with sterile distilled water. Each plastic pot accommodated 10 seedlings, with a group of pots without the pathogen serving as a control . Wilt symptoms were observed and documented 45 days after sowing. Per cent Disease Incidence (PDI) was calculated by the formula:

PDI = (Number of wilted plants / Total number of plants) × 100

Similarly, the Translation Elongation Factor 1-α (TEF1α) and Internal Transcribed Spacer (ITS) region genes of the Fusarium isolates were amplified, and the sequences were submitted to NCBI GenBank for further analysis and documentation. Collection and isolation of biocontrol agents. Ten rhizosphere soil samples and plant samples were collected from the Samastipur and Muzaffarpur districts in Bihar, characterized by temperatures ranging from 20 to 40 °C and an annual average temperature of approximately 26 °C (Supplementary Fig. ). To isolate rhizobacteria and Trichoderma spp., 10 g of rhizosphere soil was mixed with 90 mL of sterile distilled water and serially diluted up to 10⁻⁶. From the 10⁻⁴ to 10⁻⁶ dilutions, 0.1 mL aliquots of the soil microbial suspensions were evenly spread over Nutrient Agar, King's B, and Trichoderma-specific medium (TSM) from Himedia Laboratories, India. Incubation was carried out at 28 ± 2 °C for bacteria and 25 ± 2 °C for Trichoderma spp. Distinct bacterial colonies, exhibiting diverse morphological characteristics, were chosen, purified, and preserved in a 20% glycerol solution for future use. Fungal colonies were examined for morphological differences under a compound microscope at 400 × magnification (Olympus CX-21i, Japan). Subsequently, individual colonies identified as Trichoderma spp. were subcultured and stored based on their morphological features. For isolating endophytic bacteria, healthy pigeonpea plants were harvested at the flowering stage. One-gram stem samples underwent surface sterilization [70% alcohol (30 s), 1% sodium hypochlorite (30 s), sterile distilled water (3 × 60 s)] and were ground using a mortar and pestle in 9 mL of sterile water . The ground samples were serially diluted to 10⁻⁸, and 0.1 mL aliquots from this dilution were plated on Nutrient Agar and King's B agar plates.
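The serial-dilution plating described above translates to colony-forming units per gram of soil (or stem tissue) by the standard plate-count formula. A minimal sketch, where the helper name and example counts are illustrative:

```python
def cfu_per_gram(colony_counts, dilution, volume_plated_ml=0.1):
    """Estimate CFU per gram of sample from replicate plate counts.

    dilution: the tube's dilution factor, e.g. 1e-6 for the 10^-6 tube.
    volume_plated_ml: matches the 0.1 mL aliquot spread per plate above.
    """
    mean_count = sum(colony_counts) / len(colony_counts)
    # CFU/g = mean colonies / (volume plated x dilution factor)
    return mean_count / (volume_plated_ml * dilution)
```

For example, an average of 32 colonies on the 10⁻⁶ plates corresponds to about 3.2 × 10⁸ CFU per gram of soil.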
Incubation was done at 28 ± 2 °C in a BOD incubator for 2–3 days (Supplementary Fig. ). In vitro evaluation of fungal and bacterial biocontrol agents against F. udum. The dual culture technique was employed to evaluate the antagonistic effects of bacterial and fungal isolates against F. udum isolated from pigeonpea cultivar ICP 8858. For fungal evaluation, 5 mm mycelial discs of seven-day-old F. udum were positioned on one side of a Petri plate, while 5 mm discs of seven-day-old Trichoderma spp. cultures were placed on the opposite end. These plates were then incubated for seven days at 25 ± 2 °C with three replications, and control plates were also included. For the bacterial evaluation, 5 mm mycelial discs of the test pathogen were positioned at the center of PDA medium plates. Bacterial cultures were streaked on all four sides of the pathogen disc in a square pattern. Subsequently, these plates were incubated at 28 ± 2 °C for 7 days. Observations were made on the radial growth of the test pathogen with and without the presence of the antagonist, and the percentage of inhibition was calculated using the methodology outlined by . The experiment was replicated twice.

I = ((C − T) / C) × 100

where I is the per cent inhibition over control, C is the radial growth of the pathogen in control (mm), and T is the radial growth of the pathogen in treatment (mm). Molecular identification of fungal and bacterial biocontrol agents. Based on their observed antagonistic activity, promising bacteria (Eb-8, Eb-11, Eb-13, Eb-21, Rb-4, Rb-11, Rb-14, Rb-18, and Rb-19) were selected and identified at the species level through 16S rRNA sequencing. Similarly, Trichoderma spp. were identified using TEF1α and ITS region gene sequencing. The Cetyl Trimethyl Ammonium Bromide (CTAB) method was utilized to extract total genomic DNA from both the bacteria and Trichoderma spp. The DNA pellet was dissolved in 50 μL of 1X TE buffer (10 mM Tris and 1 mM EDTA). DNA quantification was carried out on a 0.8% agarose gel, and purity was assessed by determining the A260/A280 ratio using a spectrophotometer. For amplifying the 16S rRNA gene of the bacterial isolates, the forward primer (5′-GGATGAGCCHALGGCCTA-3′) and reverse primer (5′-CGGTGTGTACAAGGCCCGG-3′) were used. Subsequently, PCR reactions for Trichoderma spp. were performed using specific primer pairs, namely ITS for amplifying the Internal Transcribed Spacer region of ribosomal DNA (ITS-rDNA) and the Translation Elongation Factor 1-α gene (TEF1α). Eurofins Genomics in Bangalore, Karnataka, sequenced the amplified products using the Sanger sequencing method. Sequences were considered to belong to the same species when they were at least 99.7% identical, and those with at least 97.8% identity were classified as belonging to the same genus. Characterization and in vitro plant growth-promoting activities of bacterial biocontrol agents. Biochemical characterization. A total of nine potential bacterial isolates, known for their antifungal properties against F.
udum, underwent thorough biochemical characterization following the guidelines in Bergey's Manual of Determinative Bacteriology. This involved a series of tests, including Gram staining, amylase, catalase, oxidase, indole, methyl red, Voges–Proskauer, and citrate utilization tests . Plant growth-promoting activities. Cellulase production test. The 24-h-old bacterial isolates were inoculated on Carboxymethyl Cellulose (CMC) agar medium plates and incubated at 28 °C for five days to allow cellulase secretion. Following incubation, the agar medium was soaked in a Congo red solution (1 per cent w/v) for 15 min. Subsequently, the Congo red solution was drained, and the plates were subjected to an additional treatment with 1 M NaCl for 15 min. The presence of a clearly identifiable hydrolysis zone indicated the degradation of cellulose . Siderophore production test. Chrome Azurol S (CAS) medium was prepared, and spot inoculation of the bacterial isolates was done from actively growing cultures. Colonies that displayed an orange halo zone after 3 days of incubation at 28 ± 2 °C were regarded as positive for siderophore production . HCN and ammonia production tests. The method proposed was employed to assess the ability of bacteria to produce hydrogen cyanide. Each bacterium was streaked onto a nutrient agar medium containing 4.4 g/L of glycine. A Whatman No. 1 filter paper soaked in a picric acid solution (0.5% picric acid and 2% sodium carbonate, w/v) was placed over the agar. The plates were sealed with parafilm and then incubated for 4 days at 36 ± 2 °C. The development of an orange or red color indicated the formation of hydrogen cyanide. For ammonia production, 24-h-old bacterial cultures were inoculated in 10 mL of peptone broth and incubated at 28 ± 2 °C for 48–72 h. Then, one mL of Nessler's reagent was added to each tube, and the development of a yellow to dark brown colour was taken as a positive reaction.
Based on the intensity of colour, the isolates were divided into four groups: +, ++, +++, and ++++.

Phosphate, potassium, and zinc solubilization

The phosphate, potassium, and zinc solubilization activities of the isolates were assessed qualitatively on specific agar media. For phosphate solubilization, pure colonies were spot inoculated onto Pikovskaya's agar plates and incubated at 28 ± 2 °C for 5 days; a distinct halo zone around the colony confirmed phosphate solubilization. Similarly, for potassium solubilization, isolates were spot inoculated onto Aleksandrov agar plates and incubated for 5 days; a clear halo zone around the colony indicated potassium solubilization. For zinc solubilization, isolates were spot inoculated onto Tris minimal agar supplemented with zinc oxide and incubated at 30 °C for 3 days; a clear halo zone around the colony confirmed zinc solubilization. All biochemical tests and plant growth promoting rhizobacteria (PGPR) assays were replicated for validation.

Assessment of selected biocontrol agents against Pigeonpea Fusarium wilt under pot conditions

The rhizosphere bacterium Rb-18 and the endophytic bacterium Eb-21, both exhibiting antifungal and PGPR activities, together with Trichoderma spp. isolated from the Pigeonpea rhizosphere, were selected as biocontrol agents. The experiment used seeds of the wilt susceptible pigeonpea cultivar ICP 2376. Sterilized pots (20 × 15 cm) were each filled with 5 kg of sterilized sandy loam soil, and 10 surface sterilized seeds were sown per treatment, with three replications. Thirty-five days after sowing, five pots were inoculated with a spore suspension of F. udum (50 mL of microconidial suspension containing 1 × 10⁶ conidia/mL per pot). Of these, three pots were inoculated with a Trichoderma spp. spore suspension (6 mL, 1 × 10⁶ spores/mL) and two pots with a bacterial suspension (10 mL, 10⁸ cfu/mL) on the 45th day. Plants inoculated with the pathogen alone, and plants treated with neither the pathogen nor the biocontrol agents, served as controls. The greenhouse experiment was conducted under high humidity (≥ 90%) at 28–30 °C. Each treatment was replicated three times in a completely randomized design. The per cent disease incidence (PDI) was calculated by the following formula:
$$\mathrm{PDI}=\frac{\text{No. of wilted plants}}{\text{Total no. of plants}}\times 100$$

Activity of defence enzymes in biocontrol treated plants against Pigeonpea Fusarium wilt

The activity of defence related enzymes, peroxidase (POD), polyphenol oxidase (PPO), and phenylalanine ammonia lyase (PAL), was evaluated in Pigeonpea plants treated with Trichoderma spp. and bacterial biocontrol agents and challenged with F. udum under pot conditions. Fresh leaves were collected randomly from each treatment at 0, 24, 48, 72 and 96 h after inoculation with the biocontrol agents. Leaf tissues were frozen in liquid nitrogen and homogenized in 10 mL of ice cold 50 mM potassium phosphate buffer (pH 6.8) containing 1 M NaCl, 1 mM EDTA, 1% polyvinyl pyrrolidone and 10 mM β-mercaptoethanol. The homogenates were filtered through muslin cloth and centrifuged at 12,000 rpm and 4 °C for 25 min, and the final supernatants were used to assay peroxidase and polyphenol oxidase following the standard protocols described by . To determine PAL activity, 400 µL of extract was incubated with 0.5 mL of 0.1 M borate buffer (pH 8.8) and 0.5 mL of 12 mM L-phenylalanine in the same buffer for 30 min at 30 °C. PAL activity was measured as the rate of conversion of L-phenylalanine to trans-cinnamic acid at 290 nm, and the amount of trans-cinnamic acid synthesised was calculated from its extinction coefficient of 9630 M⁻¹ cm⁻¹. Enzyme activity was expressed on a fresh weight basis as nmol trans-cinnamic acid min⁻¹ mg⁻¹ of sample.

Assessment of selected biocontrol agents against Pigeonpea Fusarium wilt under sick plot conditions

The study was conducted at the AICRP Pigeonpea wilt sick plot located at T.C.A., Dholi, R.P.C.A.U. (25° 59′ 41.9″ N, 85° 35′ 43.3″ E), Pusa, Bihar.
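The PAL activity conversion described above (an absorbance change at 290 nm converted to nmol of trans-cinnamic acid via the Beer–Lambert law with ε = 9630 M⁻¹ cm⁻¹) can be sketched as a short calculation. The 1 cm path length and the fresh weight value are assumptions for illustration, not figures from the text:

```python
EPSILON = 9630.0   # M^-1 cm^-1, trans-cinnamic acid at 290 nm (from the text)
PATH_CM = 1.0      # standard 1 cm cuvette (assumed)

def pal_activity(delta_a290, reaction_ml=1.4, minutes=30.0, fw_mg=100.0):
    """nmol trans-cinnamic acid min^-1 mg^-1 fresh weight.

    delta_a290 : absorbance increase over the incubation
    reaction_ml: 0.4 mL extract + 0.5 mL buffer + 0.5 mL substrate
    fw_mg      : fresh weight represented by the aliquot (hypothetical)
    """
    molar = delta_a290 / (EPSILON * PATH_CM)      # mol/L of product formed
    nmol = molar * (reaction_ml / 1000.0) * 1e9   # nmol in the reaction volume
    return nmol / minutes / fw_mg
```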
The experiment was carried out over four seasons: Kharif 2021–2022, Rabi 2021–2022, Kharif 2022–2023, and Rabi 2022–2023. To confirm even distribution of the pathogen within the affected plots, four soil samples were taken from each 3 m × 3 m plot in every season; these samples were serially diluted and plated on a specialized Fusarium medium following the method outlined by . The B. subtilis isolates were inoculated into nutrient broth and the P. aeruginosa isolates into King's B broth, and the cultures were incubated at 28 ± 2 °C for 36 h on a rotary shaker set at 150 rpm. After incubation, the bacteria were collected by centrifugation at 8000 rpm for 10 min in a benchtop refrigerated centrifuge. The resulting pellets were washed three times with sterile distilled water (SDW), and the cell concentration was adjusted by dilution to 1 × 10⁸ colony forming units (cfu) per millilitre, corresponding to suspensions with an optical density of 0.45 at 610 nm as determined by a UV–visible spectrophotometer (Mortensen, 1992). The Trichoderma spp. isolates were cultured on PDA plates for 10–12 days at 28 ± 2 °C. Ten mL of sterile distilled water was then added to each plate, and conidia were gently detached from the culture surface by shaking; the remaining conidia were dislodged with a sterile brush, and the suspension was collected in a 100 mL conical flask. After passing through four layers of cheesecloth, the conidial suspension was centrifuged at 2500 rpm for 10 min and resuspended in distilled water. The conidial concentration was adjusted to 1 × 10⁶ conidia per millilitre using a hemocytometer. Pigeonpea seeds of the wilt susceptible cultivar ICP8863 were soaked in the culture suspension amended with 0.2% carboxymethyl cellulose (CMC) to aid attachment of the biocontrol agent to the seeds.
These treated seeds were incubated at 28 ± 2 °C on a rotary shaker at 150 rpm for 6 h and then air dried under sterile conditions. Carbendazim was applied as a seed treatment at 2.0 mg/g of seed. Seeds soaked in distilled water amended with 0.2% CMC served as the control. The treated seeds were sown manually in the wilt affected plots at a spacing of 90 cm between rows and 20 cm within rows, at a depth of 2–3 cm. The experiment followed a randomized block design with seven treatments, each replicated; each replicate occupied a 3 m × 3 m (9 m²) plot. Wilt incidence was assessed 65 days after sowing, and the per cent disease incidence (PDI) was calculated by the following formula:
$$\mathrm{PDI}=\frac{\text{No. of wilted plants}}{\text{Total no. of plants}}\times 100$$

AMMI analysis

In this study, the performance of seven treatments (T) and their interactions with four environments (E) were assessed. Disease incidence data from the treatments were organized to be compatible with the AMMI (additive main effects and multiplicative interaction) model. The AMMI statistical model, along with the computational methods detailed in , was employed for the analysis. An analysis of variance partitioned the variation into main effects of the treatments (T) and the environments (E) and the treatment × environment (T × E) interaction. These analyses were carried out using the GEA-R software developed by CIMMYT and the R package Agricolae.

Ethical statement

All authors have approved the manuscript and agreed to its submission to Scientific Reports. The submitted work is original and has not been submitted or published elsewhere. The manuscript has been prepared following principles of ethical and professional conduct. The study does not involve human participants or animals.

IUCN policy statement

The experimental research and field studies on plants, both cultivated and wild, strictly followed institutional, national, and international guidelines, including the IUCN Policy Statement on Research Involving Species at Risk of Extinction and the Convention on International Trade in Endangered Species of Wild Fauna and Flora. No endangered species of wild fauna or flora were involved, reflecting our commitment to biodiversity conservation and to minimizing adverse impacts on vulnerable plant populations.
This comprehensive compliance aims to advance scientific knowledge while championing environmental sustainability and global biodiversity preservation, upholding the highest standards of research integrity for the well-being of ecosystems and future generations.
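The AMMI partition described in the analysis section above (additive treatment and environment main effects plus a singular value decomposition of the doubly centred interaction residuals) can be sketched in a few lines. The 7 × 4 incidence matrix below is hypothetical, not the study's data:

```python
import numpy as np

# Hypothetical disease-incidence matrix (%): 7 treatments x 4 environments
Y = np.array([
    [55, 60, 50, 65],
    [30, 35, 28, 40],
    [25, 28, 22, 30],
    [45, 50, 40, 55],
    [20, 22, 18, 25],
    [35, 38, 30, 45],
    [70, 75, 65, 80],
], dtype=float)

grand = Y.mean()                 # grand mean
t_eff = Y.mean(axis=1) - grand   # additive treatment main effects
e_eff = Y.mean(axis=0) - grand   # additive environment main effects

# Doubly centred residuals carry the T x E interaction
resid = Y - grand - t_eff[:, None] - e_eff[None, :]

# Multiplicative part: SVD of the interaction matrix
U, s, Vt = np.linalg.svd(resid, full_matrices=False)
ipca1_t = U[:, 0] * np.sqrt(s[0])    # treatment IPCA1 scores
ipca1_e = Vt[0] * np.sqrt(s[0])      # environment IPCA1 scores
explained = s**2 / (s**2).sum()      # interaction variance per IPCA axis
```

The first interaction principal component axis (IPCA1) scores are what an AMMI biplot of treatments and environments is drawn from.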
Pigeonpea seeds of different cultivars were obtained from the Indian Institute of Pulses Research (IIPR), Kanpur, under the All India Coordinated Research Project (AICRP) on Pigeonpea wilt programme.
Pigeonpea plants exhibiting typical wilt symptoms were collected from highly susceptible cultivars (ICP 2376 and BAHAR), a moderately resistant cultivar (ICP 8862), and resistant cultivars (ICP 8858 and ICP 9174) at the AICRP Pigeonpea wilt disease sick plot located at Tirhut College of Agriculture, Dholi (25° 59′ 41.9″ N latitude, 85° 35′ 43.3″ E longitude). Stem segments showing vascular discoloration were collected, surface sterilized [70% alcohol (30 s), 1% sodium hypochlorite (30 s), sterile distilled water (3 × 60 s)], inoculated onto Potato Dextrose Agar (PDA) medium, and incubated at 25 ± 2 °C for 72 h . Colonies exhibiting characteristic Fusarium morphology were selected, subcultured, and grown on PDA medium following the methods outlined by , . Cultural characteristics, such as growth rate, growth pattern, mycelial colour, pigmentation, radial growth, and zonation, were recorded after an 8 day incubation period. Microconidia and macroconidia morphology were observed after 8 and 15 days of incubation, respectively.
To confirm the pathogenicity and identity of the isolated fungus as Fusarium, Koch's postulates were performed on the susceptible Pigeonpea cultivar ICP 2376. Purified Fusarium cultures were grown in 250 mL conical flasks containing 100 g of sorghum grains autoclaved at 121 °C under 15 lb pressure for 15 min; following inoculation, the cultures were incubated for 15 days. The prepared inoculum was mixed with sterilized sandy loam soil at a 1:4 ratio (pathogen to soil, w/w) and placed in 15 cm diameter plastic pots. Pigeonpea seeds were surface sterilized with sodium hypochlorite solution for 2 min and rinsed three times with sterile distilled water. Each pot accommodated 10 seedlings, and a group of pots without the pathogen served as controls . Wilt symptoms were observed and documented 45 days after sowing, and the per cent disease incidence (PDI) was calculated by the formula
$$\mathrm{PDI}=\frac{\text{No. of wilted plants}}{\text{Total no. of plants}}\times 100$$
Similarly, the translation elongation factor 1-α gene (TEF1α) and the internal transcribed spacer region (ITS) of the Fusarium isolates were amplified, and the sequences were submitted to NCBI GenBank for further analysis and documentation.
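The PDI formula above is a simple ratio; a minimal helper (counts hypothetical) can be sketched as:

```python
def percent_disease_incidence(wilted: int, total: int) -> float:
    """PDI = (no. of wilted plants / total no. of plants) * 100."""
    if total <= 0:
        raise ValueError("total number of plants must be positive")
    return wilted / total * 100

# e.g. 9 wilted seedlings out of 10 gives a PDI of 90
pdi = percent_disease_incidence(9, 10)
```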
Ten rhizosphere soil samples and plant samples were collected from the Samastipur and Muzaffarpur districts of Bihar, where temperatures range from 20 to 40 °C with an annual average of approximately 26 °C (Supplementary Fig. ). To isolate rhizobacteria and Trichoderma spp., 10 g of rhizosphere soil was mixed with 90 mL of sterile distilled water and serially diluted up to 10⁻⁶. From the 10⁻⁴ to 10⁻⁶ dilutions, 0.1 mL aliquots of the soil microbial suspension were evenly spread over Nutrient Agar, King's B, and Trichoderma-specific medium (TSM; Himedia Laboratories, India). Plates were incubated at 28 ± 2 °C for bacteria and 25 ± 2 °C for Trichoderma spp. Distinct bacterial colonies exhibiting diverse morphological characteristics were chosen, purified, and preserved in 20% glycerol for future use. Fungal colonies were examined for morphological differences under a compound microscope at 400 × magnification (Olympus CX-21i, Japan), and individual colonies identified as Trichoderma spp. were subcultured and stored based on their morphological features. To isolate endophytic bacteria, healthy pigeonpea plants were harvested at the flowering stage. One gram stem samples were surface sterilized [70% alcohol (30 s), 1% sodium hypochlorite (30 s), sterile distilled water (3 × 60 s)] and ground with a mortar and pestle in 9 mL of sterile water . The ground samples were serially diluted to 10⁻⁸, and 0.1 mL aliquots from this dilution were plated on Nutrient Agar and King's B agar and incubated at 28 ± 2 °C in a BOD incubator for 2–3 days (Supplementary Fig. ).
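The dilution plating described above implies the usual back calculation of population density from a countable plate; a minimal sketch (the colony count is hypothetical):

```python
def cfu_per_gram(colonies: int, dilution: float, plated_ml: float = 0.1) -> float:
    # CFU per gram of soil = colony count / (final dilution x volume plated);
    # the initial 10 g of soil in 90 mL of diluent is the 10^-1 step.
    if colonies < 0 or dilution <= 0 or plated_ml <= 0:
        raise ValueError("counts and factors must be positive")
    return colonies / (dilution * plated_ml)

# 42 colonies on a plate spread with 0.1 mL of the 10^-6 dilution
density = cfu_per_gram(42, 1e-6)   # 4.2e8 CFU per gram of soil
```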
In vitro antagonistic activity against F. udum

The dual culture technique was employed to evaluate the antagonistic effects of the bacterial and fungal isolates against F. udum isolated from Pigeonpea cultivar ICP 8858. For the fungal antagonists, 5 mm mycelial discs of seven day old F. udum were positioned on one side of a petri plate, and 5 mm discs of seven day old Trichoderma spp. cultures were placed on the opposite side; these plates were incubated for seven days at 25 ± 2 °C with three replications, and control plates were also included. For the bacterial antagonists, 5 mm mycelial discs of the test pathogen were placed at the centre of PDA plates, bacterial cultures were streaked on all four sides of the pathogen disc in a square pattern, and the plates were incubated at 28 ± 2 °C for 7 days. The radial growth of the test pathogen with and without the antagonist was recorded, and the percentage of inhibition was calculated using the methodology outlined by . The experiment was replicated twice.
$$I=\frac{C-T}{C}\times 100$$
where I is the per cent inhibition over control, C is the radial growth of the pathogen in the control (mm), and T is the radial growth of the pathogen in the treatment (mm).
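The inhibition index follows directly from the paired radial growth measurements; a minimal sketch (the growth values are hypothetical):

```python
def percent_inhibition(control_mm: float, treatment_mm: float) -> float:
    """I = (C - T) / C * 100, the growth reduction relative to the control."""
    if control_mm <= 0:
        raise ValueError("control growth must be positive")
    return (control_mm - treatment_mm) / control_mm * 100

# a pathogen colony of 85 mm in control reduced to 25.5 mm by an antagonist
inhibition = percent_inhibition(85.0, 25.5)   # 70 per cent inhibition
```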
Morphological, pathogenic and molecular characterisation of the pathogen

In the present study, a total of five Fusarium isolates were obtained, each originating from a distinct Pigeonpea cultivar (ICP 2376, BAHAR, ICP 8862, ICP 8858, and ICP 9174). The cultural and morphological traits of these Fusarium isolates were investigated on PDA, revealing notable differences in colony texture, substrate pigmentation, mycelial color, and conidia length and width (Supplementary Fig. ). All Fusarium isolates proved pathogenic, causing wilt disease in the pathogenicity test with an incidence ranging from 60 to 90%. Notably, the Fusarium isolate obtained from the ICP 8858 cultivar demonstrated the highest disease incidence at 90%, indicating its virulence, and was subsequently chosen for further antagonistic investigations. To molecularly characterize these isolates, PCR amplification of the ITS-rDNA region using universal primers yielded amplicons ranging from 500 to 550 bp in length. Additionally, an analysis of nucleotide sequences of the TEF1α gene revealed variations in length, ranging from 670 to 725 base pairs among the five Fusarium isolates. Subsequently, all sequences were submitted to the NCBI GenBank, and accession numbers were obtained for reference and documentation purposes (Table ) (Fig. ).

Isolation of beneficial microbes

In our present study, based on cultural and morphological traits, a total of 100 endophytic and 100 rhizosphere bacteria were isolated, purified and evaluated for antagonistic activity against F. udum . Simultaneously, we isolated three Trichoderma strains from 10 rhizosphere soil samples and compared them to the Trichoderma Taxonomy database ( https://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?id=5543 ) using criteria such as conidiospore color and pigment secretion on the PDA medium. Subsequent microscopic examination confirmed the presence of three isolates: T . harzianum , T . asperellum , and an unidentified Trichoderma species. Importantly, two of these isolates, T . harzianum and T . asperellum , were categorized within the Harzianum clade and Hamatum sub-branch, respectively, while the third isolate, Trichoderma sp., could not be conclusively identified.

In vitro evaluation of biocontrol agents against F . udum

In the dual culture technique, it was noted that among the tested bacterial isolates, four endophytic and five rhizosphere isolates effectively inhibited the growth of F . udum by more than 60%. Specifically, the endophytic bacterial strains identified as Eb-21, Eb-13, Eb-8, and Eb-11 exhibited inhibition percentages of 72.22%, 65.11%, 64.44%, and 62.88%, respectively. In contrast, rhizosphere bacteria labeled as Rb-18, Rb-14, Rb-19, Rb-4, and Rb-11 exhibited inhibition percentages of 71.11%, 68.44%, 65.3%, 64.8%, and 62.11%, respectively (Fig. ). T . harzianum , T . asperellum , and Trichoderma sp. exhibited inhibition percentages of 65%, 60%, and 55%, respectively, against F . udum .

Molecular based identification of bacterial and fungal isolates

Based on their antifungal characteristics, nine bacterial strains and three Trichoderma species were selected for molecular identification. The Polymerase Chain Reaction (PCR) method was utilized to amplify fragments of the bacterial 16S rRNA gene. Subsequently, the obtained 16S rRNA gene sequences were compared against the NCBI nucleotide database using the Basic Local Alignment Search Tool (BLAST). The results of this comparison led to the identification of the isolates as follows: Rb-4 ( Bacillus sp.), Rb-11 ( B . subtilis ), Rb-14 ( B . megaterium ), Rb-18 ( B . subtilis ), Rb-19 ( B . velezensis ), Eb-8 ( Bacillus sp.), Eb-11 ( B . subtilis ), Eb-13 ( P . aeruginosa ), and Eb-21 ( P . aeruginosa ). The genetic sequences were subsequently deposited into the NCBI GenBank, and specific accession numbers were obtained (Fig. , Table ).
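The dual-culture inhibition percentages reported in this section are given without the underlying formula; the conventional calculation, assumed here, compares pathogen radial growth in the control plate (C) with growth in dual culture with the antagonist (T) as (C − T)/C × 100. The growth measurements below are hypothetical.

```python
def percent_inhibition(control_growth_mm: float, dual_growth_mm: float) -> float:
    """Percent inhibition of pathogen radial growth: (C - T) / C * 100.

    C = colony growth in the control plate (mm),
    T = colony growth in dual culture with the antagonist (mm).
    """
    if control_growth_mm <= 0:
        raise ValueError("control growth must be positive")
    return (control_growth_mm - dual_growth_mm) / control_growth_mm * 100.0

# Hypothetical measurements: 90 mm control growth vs 25 mm with an antagonist
print(round(percent_inhibition(90.0, 25.0), 2))  # 72.22
```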
Similarly, for the Trichoderma isolates, BLAST analysis was employed to compare their fungal TEF (Translation Elongation Factor 1-α gene) and small ribosomal gene (18S rRNA gene) sequences with existing Trichoderma sequences in the NCBI database. The BLAST analysis confirmed that the amplified TEF and ITS gene sequences from the Trichoderma isolates showed similarity to known Trichoderma species. Consequently, the sequences were submitted to the NCBI GenBank, securing accession numbers: ITS (MZ348898) and TEF (PP060450) for T . harzianum , ITS (MZ411690) and TEF (PP060451) for T . asperellum , and ITS (MZ411691) and TEF (PP060452) for Trichoderma sp. (Fig. ).

Biochemical characterization of bacterial isolates

Bacterial isolates that demonstrated inhibitory effects on F . udum in dual culture experiments underwent biochemical characterization. Among these isolates, all tested positive for the catalase test, seven displayed a positive Gram reaction, six exhibited positive results for the amylase and oxidase tests, and two showed positive outcomes for the citrate utilization and methyl red reduction tests. However, none of the isolates showed a positive result in the indole production test (Table ).

In vitro plant growth promoting activities

A total of nine potential bacterial isolates, which exhibited inhibitory effects against F . udum in the dual culture technique, underwent in vitro assessment for their growth promoting activities. The cellulase activity of these isolates was evaluated using CMC agar media. The presence of a halo zone around the colony was considered a positive outcome for this test, and variations were observed among the isolates. Specifically, four isolates, namely Eb-8, Eb-21, Rb-14, and Rb-18, exhibited halo zones around their colonies. None of the isolates showed hydrogen cyanide (HCN) production.
Interestingly, it was noted that the rhizosphere bacterium (Rb-18) displayed a higher capacity for siderophore production compared to the endophytic bacterium (Eb-21). Ammonia production was recorded in three isolates: Eb-21, Rb-11 and Rb-18. Additionally, bacterial isolates demonstrating the ability to solubilize inorganic phosphate, potassium, and zinc were assessed based on the formation of clear halo zones on Pikovskaya's, Aleksandrov, and Tris-minimal agar plates, respectively. On Pikovskaya's medium, isolates Eb-21, Rb-14, and Rb-18 exhibited the formation of halo zones (Supplementary Fig. ). Similarly, on Aleksandrov agar plates, Rb-11 and Rb-18 displayed a halo zone, and on zinc-supplemented Tris-minimal agar, Eb-21, Rb-11, Rb-14, and Rb-18 exhibited halo zones (Table ).

Assessment of selected biocontrol agents against Pigeonpea Fusarium wilt under pot conditions

The potted plants experiment aimed to evaluate the effectiveness of various biocontrol agents, namely B . subtilis , P . aeruginosa , T . harzianum , T . asperellum , and Trichoderma sp., in reducing Fusarium wilt in Pigeonpea. The disease incidence in the control group without any treatment (T6) was high at 93.33%. However, the treatment involving P . aeruginosa and F . udum (T2) exhibited the lowest disease incidence at 20%. This was followed by the treatments with T . harzianum + F . udum (T3) at 21.66%, B . subtilis + F . udum (T1) at 23.33%, T . asperellum + F . udum (T4) at 26.66%, and Trichoderma sp. + F . udum (T5) at 29.33% (Table ).

Activity of defence enzymes in biocontrol treated plants against Pigeonpea Fusarium wilt

In this study, the enzymes associated with plant induced systemic resistance (ISR), including peroxidase (POD), polyphenol oxidase (PPO) and phenylalanine ammonia lyase (PAL), were investigated in vitro. Prospective biocontrol bacteria and Trichoderma spp. isolates were introduced to the plants.
The results of the study showed that the highest levels of POD (1.53), PPO (1.53) and PAL (27) activity were observed in plants treated with P . aeruginosa + F . udum , followed by B . subtilis + F . udum and T . harzianum + F . udum . Notably, the POD, PPO, and PAL activity levels were significantly higher in plants treated with bacteria compared to those treated with fungi. Enzyme activity showed a notable increase in all treatments, peaking at 72 h before gradually declining. Control plants, which were neither exposed to the pathogen nor the biocontrol agents, exhibited consistent enzyme activity levels across all time intervals. In contrast, plants treated with the pathogen alone did not display any significant POD, PPO, or PAL activity when compared to plants treated with the biocontrol agents (Supplementary Figs. , , ).

Assessment of selected biocontrol agents against pigeonpea Fusarium wilt under sick plot conditions

The potential fungal and bacterial biocontrol agents were applied as seed treatments on the wilt-susceptible cultivar ICP2376 and evaluated for their effectiveness against pigeonpea wilt in sick plots over four seasons (2021–2022 Kharif, 2021–2022 Rabi, 2022–2023 Kharif, 2022–2023 Rabi). Across all treatments during these four seasons, the lowest mean disease incidence was observed in T2 (33.33; P . aeruginosa ) and T3 (35.41; T . harzianum ), followed by T6 (36.5; Carbendazim), T1 (36.66; B . subtilis ), T4 (52.91; T . asperellum ) and T5 (53.33; Trichoderma sp.) (Table ; Fig. ).

AMMI ANOVA

ANOVA of seven Treatments (T) over four Environments (E) showed that 0.24% of the total SS was attributed to the Environments (E) effect, 95.08% to the Treatments (T) effect, and 0.88% to the Treatments by Environments (T × E) interaction effect.
The T × E interaction was further divided into Interaction Principal Component Axes (IPCA) and residuals, in which IPCA1 contributed 49.01% of the interaction SS, followed by IPCA2 with 37.03%; IPCA1 and IPCA2 cumulatively contributed 97.411% of the total interaction (Table ).

AMMI1 biplot display

The AMMI1 biplot was employed to analyze the average disease incidence and IPCA1 scores of the seven treatments in the four environments. It revealed that treatments on the left side of the perpendicular line exhibited lower disease incidence, with T2 having the lowest, followed by T3 and T1. Conversely, treatments on the right side of the perpendicular line displayed higher disease incidence, with T6 having a particularly high incidence (Fig. ).
Fusarium wilt, caused by the fungus F . udum, poses a significant threat to pigeonpea cultivation worldwide, leading to substantial yield losses , . F . udum persists in the soil for extended periods through the formation of chlamydospores and acts as a hemibiotroph when it resides on infected plant remains , . The prolonged persistence of the fungus in the soil and plant debris hampers disease management using conventional methods such as crop rotation and flooding , . Currently, chemical control methods are commonly employed to address this serious wilt disease . While fungicide application has proven helpful up to seed treatment, it is neither feasible nor economical for crops in the field due to the soil borne nature of F . udum . Moreover, there is a possibility of the pathogen developing resistance to commonly used fungicides . Environmental safety concerns also drive the exploration of alternative management strategies that are sustainable in the long run. Although certain resistant pigeonpea cultivars against Fusarium wilt have been identified previously, questions remain regarding the durability of field resistance to F . udum infection over time under field conditions . Additionally, challenges arise from the evolution of new pathogen variants, the presence of location specific isolates, and the physiological specialization within the Fusarium sp. complex, which hinder successful wilt disease management in pigeonpea. Earlier studies on pathogenic variability in pigeonpea wilt have reported three different pathogenic groups , five pathogenic variants , and nine variants . While soil solarization can address some of these challenges, it has adverse effects on soil quality and beneficial microorganisms . Biological control emerges as an alternative approach to combat soil borne diseases . Biocontrol agents sourced from the native rhizosphere and within plant tissues are preferred due to their adaptability to local soil and climatic conditions . 
Moreover, the composition of beneficial microbial populations in the rhizosphere is influenced by both plant root exudates and soil characteristics . The fertile alluvial soils rich in organic matter found in the Samastipur and Muzaffarpur districts of Bihar, shaped primarily by sediment deposition from the Gangetic alluvium in the Indo-Gangetic plains, support the growth of bioagents capable of effectively managing wilt diseases and promoting plant growth. Consequently, our recent study aimed to investigate the potential of native microflora isolated from various rhizosphere zones in Bihar for the biocontrol of Fusarium wilt in pigeonpea, as well as for enhancing plant growth. In our study, we assessed 100 endophytic bacteria, 100 native rhizosphere bacteria, and three Trichoderma spp. against F . udum . Among these, four endophytes, five rhizosphere bacteria, and three Trichoderma spp. exhibited inhibition rates exceeding 60% compared to the control, indicating their potential as promising isolates. Similar findings were reported by , , who observed that endophytic and rhizosphere bacteria effectively suppressed F . udum growth by inhibiting mycelial development and spore germination. Consistent with our results, it was reported that rhizobacteria from pigeonpea demonstrated fungicidal effects against F . udum . This fungicidal activity was attributed to the synthesis of various biocidal substances, including antifungal metabolites, chitinolytic compounds, enzymes capable of breaking down cell walls, and volatile compounds with antifungal properties such as ammonia and cyanide. Under laboratory conditions, it was observed that certain rhizobacteria, namely Rb-4, Rb-11, Rb-14, Rb-18, and the endophytic bacterium Eb-21, demonstrated the capability to produce siderophores. In natural soil environments, the production of siderophores is more prevalent among the rhizobacterial community .
The synthesis of siderophores by rhizobacteria plays a crucial role in their capacity to regulate the growth of pathogens. This is achieved by diminishing the availability of ferric ions in the rhizosphere, effectively inhibiting the growth and virulence of soil borne plant pathogens. An illustrative example of this phenomenon is seen in P . aeruginosa , which, when capable of producing siderophores under laboratory conditions, exhibits a broad spectrum of antagonistic effects against pathogens like F . ciceri and F . udum , . Similarly, research has indicated that strains of B . atrophaeus and B . subtilis , proficient in siderophore production, can effectively suppress the growth of wilt disease causing pathogens in crops such as cotton ( Fusarium oxysporum) and pepper both under in vitro and in vivo conditions. Plant Growth Promoting Rhizobacteria (PGPR) possess the ability to produce compounds like hydrogen cyanide (HCN) and ammonia (NH3), which play a dual role in inhibiting fungal growth and promoting plant development , . Notably, the ammonia produced by PGPR disperses in the soil, effectively eliminating infectious propagules of specific plant pathogens . Additionally, it serves as a nitrogen source for host plants, facilitating the growth of roots and shoots, ultimately increasing overall biomass , . In our current study, three bacterial isolates, namely Eb-21, Rb-11, and Rb-18, exhibited positive ammonia production. These results align with previous findings on NH3 production by rhizospheric strains of Bacillus and Pseudomonas under in vitro conditions. Furthermore, these strains effectively managed disease incidence caused by F . udum in in vivo conditions . However, it is important to note that all nine isolates tested negative for the HCN test in this study. In a related investigation by , it was documented that two rhizosphere strains of B . subtilis and two endophytic bacterium strains of P . aeruginosa also exhibited an inability to produce HCN. 
Furthermore, biocontrol agents employ critical mechanisms such as cell wall-degrading enzymes, notably cellulase, to regulate soilborne pathogens . Cellulase exhibits a potent inhibitory effect on the hyphal growth of fungal pathogens by hydrolyzing the 1,4-β-D-glucosidic linkages in cellulose, playing a significant ecological role in recycling cellulose, a major polysaccharide in nature , . This degradation process involves various cellulolytic enzymes such as cellulases/endoglucanases, exo-glucanases, and β-glucosidases, which synergistically convert cellulose into β-glucose. In our study, bacterial isolates Eb-8, Eb-21, Rb-14, and Rb-18 exhibited positive cellulase production, consistent with previous findings indicating that biocontrol agents produce lytic enzymes and cellulase to degrade pathogen cell walls . Similarly, research by , has demonstrated the inhibitory effects of cellulases produced by bacteria from the Bacillus and Pseudomonas genera on the growth of phytopathogenic fungi, thereby contributing to disease suppression in chickpea and pigeonpea wilt. Phosphorus (P), potassium (K), and zinc (Zn) are essential macronutrients crucial for biological growth and development. However, the concentrations of soluble P, K, and Zn in the soil are typically low because the majority of these nutrients exist in insoluble forms within rocks, minerals, and other deposits , . PGPR play a crucial role in mobilizing these nutrients in the rhizosphere, making them accessible to plants , . Under in vitro conditions, rhizosphere bacteria, specifically Rb-18 and Rb-11, demonstrated the ability to solubilize inorganic phosphorus, potassium, and zinc. The solubilization of minerals was notably more efficient in rhizosphere bacteria compared to endophytic bacteria.
Several studies have also demonstrated the involvement of rhizospheric Bacillus and Pseudomonas genera in the solubilization of phosphorus, potassium, and zinc under both controlled and field conditions, resulting in enhanced plant growth and yield – . In the potted plant experiment, treatments T2 ( P . aeruginosa + F . udum ), T3 ( T . harzianum + F . udum ), T1 ( B . subtilis + F . udum ), and T4 ( T . asperellum + F . udum ) demonstrated a significant reduction in the incidence of wilt disease. This aligns with findings from previous studies , , which also found that native Pseudomonas spp., Bacillus spp., and Trichoderma spp. isolated from the rhizosphere of pigeonpea effectively reduced pigeonpea wilt disease in in vitro experiments. Beneficial microorganisms often adopt an indirect strategy to enhance a plant's resistance against invading phytopathogens by stimulating the plant's defense mechanisms. In our study, we focused on induced systemic resistance (ISR) in pigeonpea exposed to antagonistic microbes, including B . subtilis , P . aeruginosa , T . harzianum , T . asperellum , and Trichoderma sp., in the presence of the wilt-causing pathogen F . udum . Additionally, we observed that plants inoculated with F . udum but lacking these bioagents exhibited a reduction in the activity of defense-related antioxidant enzymes, including POD, PPO, and PAL. The increased activity of the host plant's defense system, particularly the enzymes POD, PPO, and PAL, can be attributed to the secretion of siderophores, chitinase, and protease by these microbes. These compounds act as signaling molecules that activate systemic resistance . Several studies have demonstrated that Plant Growth Promoting Rhizobacteria (PGPR) can trigger various defense responses in host plant tissues, including the enhancement of antioxidant defense enzyme activity during pathogen attacks , .
Multiple case studies provide evidence that the inoculation of PGPR can activate ISR related antioxidant enzymes, leading to a reduction in the severity of diseases caused by F . udum in pigeonpea. For instance, treatments involving B . subtilis , P . aeruginosa , and Trichoderma spp. have been shown to activate ISR related antioxidant enzymes, ultimately mitigating the impact of F . udum induced diseases in pigeonpea . In subsequent field investigations, the application of seed treatment with antagonistic microbes, including P . aeruginosa (33.33%), T . harzianum (35.41%), B . subtilis (36.66%), and T . asperellum (52.91%), demonstrated effectiveness in reducing the incidence of wilt disease in pigeonpea plants under disease challenged conditions. Numerous rhizosphere microbes have showcased their ability to alleviate the detrimental impacts of both biotic and abiotic stress factors, ultimately fostering plant growth and development . Previous studies have indicated that T . harzianum and T . asperellum exhibit mycoparasitic activity against soil borne pathogens by releasing compounds such as stigmasterol and ergosterol , . Moreover, soil applications of T . harzianum have been demonstrated to reduce the population of F . udum in the soil, consequently decreasing the occurrence of pigeonpea wilt . Additionally, P . aeruginosa produces antibiotics like oxychlororaphin and phenazine-1-carboxylic acid, which have proven effective in reducing Fusarium wilt in both chickpea and pigeonpea . Extracellular proteins from B . subtilis have been found to induce flocculation and vacuolation in F . udum mycelium . The diverse antimicrobial compounds produced by these beneficial microbes hinder the growth, metabolism, and pathogenicity of various fungal phytopathogens . Consequently, these beneficial fungal and bacterial microbes effectively alleviate the severity of F . udum induced wilt disease. 
This observation is supported by a previous report suggesting that antagonistic strains of the Pseudomonas, Bacillus, and Trichoderma genera, isolated from the pigeonpea rhizosphere, significantly reduce the severity of wilt disease caused by F. udum in host plants. Additionally, these rhizobacterial inoculations have been shown to enhance the growth characteristics of host plants compared to untreated controls. AMMI ANOVA of all five treatments (T) over four environments (E) showed that 0.24% of the total sum of squares (SS) was attributable to the environment (E) effect, 95% to the treatment (T) effect, and 0.88% to the treatment-by-environment (T x E) interaction. The large SS for treatments reflects substantial differences among the mean disease incidences of the treatments, which accounted for most of the variation in their reactions.
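The SS partition quoted above can be checked with a standard balanced two-way ANOVA decomposition (the additive part of an AMMI analysis). The sketch below uses simulated incidence data, not this study's data, in which treatment differences dwarf environment differences, mirroring the reported pattern of roughly 95% of the total SS falling to treatments:

```python
from itertools import product
import random

random.seed(0)

treatments = ["T1", "T2", "T3", "T4", "T5"]
environments = ["E1", "E2", "E3", "E4"]  # e.g. four field seasons
n_rep = 3  # replicate plots per treatment-environment cell

# Simulated disease incidence (%): treatment effects are made much larger
# than environment effects, mimicking the partition reported in the text.
base = {"T1": 35.0, "T2": 20.0, "T3": 25.0, "T4": 50.0, "T5": 80.0}
data = {
    (t, e): [base[t] + 0.5 * environments.index(e) + random.gauss(0, 1)
             for _ in range(n_rep)]
    for t, e in product(treatments, environments)
}

def mean(xs):
    return sum(xs) / len(xs)

all_y = [y for ys in data.values() for y in ys]
grand = mean(all_y)
t_mean = {t: mean([y for e in environments for y in data[(t, e)]]) for t in treatments}
e_mean = {e: mean([y for t in treatments for y in data[(t, e)]]) for e in environments}

# Balanced two-way ANOVA sums of squares.
ss_T = sum(len(environments) * n_rep * (t_mean[t] - grand) ** 2 for t in treatments)
ss_E = sum(len(treatments) * n_rep * (e_mean[e] - grand) ** 2 for e in environments)
ss_TxE = sum(
    n_rep * (mean(data[(t, e)]) - t_mean[t] - e_mean[e] + grand) ** 2
    for t, e in product(treatments, environments)
)
ss_total = sum((y - grand) ** 2 for y in all_y)

for label, ss in [("Treatments (T)", ss_T), ("Environments (E)", ss_E), ("T x E", ss_TxE)]:
    print(f"{label}: {100 * ss / ss_total:.2f}% of total SS")
```

The residual (error) SS is the remainder, ss_total - (ss_T + ss_E + ss_TxE); a dominant ss_T is exactly what a "large SS for Treatments" means in an AMMI ANOVA table.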
In summary, this study highlights the serious threat of Fusarium wilt in pigeonpea and the limited effectiveness of conventional management methods. Indigenous biocontrol agents, such as P. aeruginosa (Eb-21), T. harzianum, and B. subtilis (Rb-18), have shown promise in controlling Fusarium wilt in both lab and field settings. They exhibited antagonistic activity against F. udum, boosted beneficial enzyme activity, and strengthened pigeonpea's resistance mechanisms. Over four seasons of field trials, treatments with P. aeruginosa and T. harzianum consistently had the lowest disease rates. This research emphasizes the potential of these biocontrol agents as sustainable alternatives to traditional fungicides and resistant cultivars for managing Fusarium wilt.
Frontline assessors’ opinions about grading committees in a medicine clerkship

Undergraduate medical education programs utilize summative assessments to compare student performance against defined learning objectives, judging whether students have achieved the knowledge, attitudes, and skills needed to successfully complete their current course or clerkship and transition to the next phase of their training . This system upholds the importance of patients as stakeholders in medical education, and it ensures trainees can competently deliver care to the extent that their phase of training permits . Medicine clerkship grades, a form of summative assessment, serve as indicators of student achievement relative to the clerkship’s pre-determined competencies and provide feedback to students on their clinical skills and knowledge . At many medical schools, Internal Medicine clerkship grades are based on a combination of standardized written tests, objective structured clinical examinations, and workplace performance assessments, which are completed by faculty and residents with particular emphasis on direct observations of clinical performance . Ideally summative assessment systems, including clerkship grades, would ensure that medical schools graduate competent physicians ready for the next phase of training in a manner free of bias. However, there is growing concern about grading accuracy and fairness from both students and clinical supervisors, especially in light of the high value placed on these grades during awards and residency program selection processes . The challenge of grading reliability stems, in part, from inter-institutional and inter-clerkship variability in grading practices, as well as interrater differences in subjective judgement of student performance .
Furthermore, increasing evidence suggests gender and racial bias contribute to grading discrepancies, including at our own institution, Washington University School of Medicine in St. Louis (WUSM) . Collective decision-making by grading committees has been proposed as a strategy to improve the fairness, transparency and consistency of grading compared to individual grader assessment . Moreover, implementation of grading committees allows for a holistic discussion of student performance, with internal support for difficult decisions . In essence, shared decisions are thought to be superior to decisions made by individuals . This strategy has already been adopted in graduate medical education (GME), with assessment of resident and fellow physician performance occurring via Clinical Competency Committees . In 2020–2021, WUSM instituted grading committees in the assessment of medical students on core clerkships. Before then, clerkship workplace performance assessments consisted of written and verbal evaluations. Supervising faculty and residents were also asked to submit a final grade to the Clerkship Director regarding the student’s clinical performance, based on a grading system of honors, high pass, pass, or fail. The Clerkship Director would then finalize clinical grades based on the composite of assessment data. The WUSM Internal Medicine grading committee introduced in 2020–2021 was composed of eight clinician educators representing a diversity of backgrounds and multiple specialties, such as Primary Care, Community and Public Health, Hospital Medicine, Infectious Diseases, and Rheumatology. All members had existing expertise in medical education and assessment, and all members underwent unconscious bias training. With the introduction of grading committees, frontline assessors, defined as the faculty and residents who supervise medical students in clinical settings, submit assessment data via standardized forms every two weeks. 
The assessment form utilized in the 2020–2021 academic year started with two general comment boxes asking for global narrative feedback on what the student did well and where they could improve. Next, there was a series of 14 prompts about key domains including medical knowledge, patient care, interpersonal and communication skills, professionalism, and practice-based learning and improvement. Each prompt asked assessors to select descriptors from a list of 4–14 options that best matched the behaviors they observed over the two-week rotation. Grading committees synthesize de-identified assessment data from multiple assessors to assign final clerkship grades. Of note, WUSM underwent curriculum reform and welcomed the Gateway Internal Medicine clerkship in January 2022. The Gateway Internal Medicine clerkship introduced a new competency-based assessment system that continues to employ grading committees but differs in how assessment data are collected and the grades students may earn . Within this article, we focus on the former curriculum and specify when lessons learned were applied to the Gateway clerkship. While the effect of group decision-making on grading fairness is being explored, less is known about the impact of this change on the roles of frontline assessors. In this study, we investigate the use of grading committees in summative assessment decisions, aiming to (1) explore frontline assessors’ opinions about the benefits and challenges of the new grading committee process at WUSM and to (2) understand faculty and resident comfort performing the workplace-based assessments utilized by grading committees to best inform faculty development initiatives at our institution.

Design
We conducted a qualitative methods study with conventional thematic analysis . We utilized semi-structured focus group interviews to explore the views of our participants. The study was approved by the Institutional Review Board at WUSM (IRB #202,102,048).
Setting
We conducted this study among assessors involved in the Internal Medicine core clerkship at WUSM and affiliated teaching hospitals, Barnes-Jewish Hospital (BJH) and John Cochran Veterans Affairs Medical Center (VAMC) in St. Louis, Missouri. Focus groups were held from February to April 2021, at the conclusion of the first academic year using grading committees. Focus groups were conducted virtually on WUSM’s HIPAA-compliant Zoom platform.

Sampling and participants
We invited frontline assessors, supervising residents and faculty from both inpatient and outpatient educational sites within the Internal Medicine clerkship, to participate in semi-structured focus groups. Invited attending physicians were educators who supervise medical students in clinical settings. Invited residents were in their PGY-2 or PGY-3 years, as upper-level residents participate in medical student assessment on clerkship rotations. To best bring the general opinions of assessors to the surface for informing faculty development initiatives, grading committee members were excluded from volunteering as interviewees. Grading committee members, who are intimately knowledgeable about the grading committee process, participated as focus group moderators to facilitate honest discussion in the absence of clerkship leadership. Standardized IRB-approved emails inviting participants to volunteer were sent to existing listservs of teaching faculty and residents (convenience sampling). A total of four focus groups were conducted with four separate participant clusters: resident physicians, attending physicians from a variety of outpatient disciplines, and attending physicians from inpatient rotations at BJH and VAMC. Participants volunteered in response to recruitment emails. An IRB-approved consent document was emailed to all potential volunteers, and informed verbal consent was obtained at the start of each focus group meeting.
Data collection
Focus group questions were designed by research team members (LZ, SL) to investigate multiple facets of grading committees and identify pitfalls most amenable to faculty and resident development at our institution. Questions were fine-tuned through a collaborative, deductive approach among medical education leadership, including the Assistant Dean of Assessment and Associate Dean for Medical Student Education. Final interview questions were revised based on feedback from a mock interview with focus group leaders. Questions covered assessors’ understanding of the grading committee process, perceived and ideal assessor roles, and the benefits and drawbacks of the grading committee (see Additional File ). Facilitators were permitted to ask probing follow-up questions to clarify and expand on comments. We continued to host focus groups until assessors from each of the major teaching services had the opportunity to participate and our data set reached saturation with no new themes identified . Interviews were moderated by one lead discussant with a secondary moderator present to ask additional clarifying questions. Moderators consisted of one junior resident (SL) and three grading committee faculty members (JC, CM, IR). Focus group discussions were recorded and professionally transcribed ( www.rev.com/ ). Transcripts were de-identified prior to qualitative analysis.

Data analysis
Qualitative data analysis was organized using the commercial online software Dedoose (Dedoose Version 9.0.17, web application for managing, analyzing, and presenting qualitative and mixed method research data, 2021. Los Angeles, CA: SocioCultural Research Consultants, LLC, www.dedoose.com ). Transcripts were independently reviewed by two researchers (SL, NN) to generate an initial code book based on identified commonalities and patterns within focus group responses. The code book was refined by an iterative process of discussion and transcript review.
Both researchers independently applied the final code book to all four transcripts. Coding differences were subsequently resolved through group discussion with a third researcher (LZ) until consensus was achieved. Final coded excerpts were reviewed by all three researchers (LZ, SL, NN), which included an attending representative from clerkship leadership, a resident, and a frontline assessor. All coders had advanced training in medical education and represented different roles within medical education, providing a diversity of perspectives. Connections between codes were linked into overarching and interconnecting themes. All authors agreed upon the final codes and themes.

Participant characteristics
Of an estimated 230 assessors, twenty-three volunteers participated in our study across four focus groups (Table ).
At the resident physician level, both PGY-2 and PGY-3 residents were represented, as PGY-1 residents do not assess WUSM students. At the faculty level, participants ranged in seniority from Instructor to Professor. Faculty represented Internal Medicine subspecialties, Primary Care, and Hospitalist Medicine. Participants’ primary teaching environments included inpatient Medicine, inpatient Cardiology, and outpatient Primary Care or ambulatory subspecialty clinics.

Themes
Using thematic analysis, four themes emerged – grading fairness, change in responsibility of assessors, challenges of assessment tools, and discomfort with the grading committee transition (Fig. ). Assessors view the switch from individual graders to a grading committee as theoretically beneficial to students due to increased grading fairness and beneficial to faculty due to decreased pressure. Despite these benefits, assessors report ongoing challenges in utilization of assessment tools and discomfort with the grading transition due to an incomplete understanding of the process.

Grading fairness
Participants universally agreed that switching from individual graders to a grading committee is beneficial, leading to a potentially fairer grading process. They cited that committee-assigned grading is “more standardized” and “objective” due to perceived decreased variability among graders and decreased impact of grader bias (Table , quote a). Some participants expressed concern that they may not have “enough exposure” and face time with students, especially on outpatient rotations where schedules may limit time observing and teaching students. They felt relieved that the grading committee takes into account perspectives from multiple assessors to provide a more complete picture of student performance (Table , quote a).
Participants considered that the grading committee values how students’ skills “are growing over the course of” the clerkship, also contributing to a more comprehensive picture of student performance. Only one participant specifically cited that the grading committee evaluates students “blindly” after de-identifying assessment data, while most participants did not cite this factor. A subtheme that emerged from discussions of grading objectivity was grade inflation. Multiple interviewees discussed an institutional history of grade inflation, citing pressure from both students and the institution to provide favorable grades, as well as personal tendencies to “give students the benefit of the doubt” (Table , quote b, c). Participants expressed conflicting opinions of whether grading committees have the potential to relieve grade inflation. Several assessors noted that switching to a grading committee reduced pressure on faculty and residents to provide exaggeratedly positive assessments (Table , quote c) and mitigated the need to build strong, defensive arguments for issuing honest grades due to the more standardized grading process (Table , quote d). Other participants, however, cautioned that there could still be persistent pressure to provide overly positive feedback due to fear of being considered “overly mean” (Table , quote e).

Change in responsibility
The majority of participants reported a change of responsibility after WUSM transitioned to grading committees. Assessors commented that they “feel less involved with the grading aspect” because they are no longer recommending a grade, but instead “are more involved in…providing an assessment” because they are tasked with describing student behaviors relative to core objectives, while the grading committee interprets their descriptions to generate a grade.
This was generally a welcomed change, resulting in more time to focus on student-centered feedback and less “pressure” put on clinical educators to give a final grade, a process that was almost universally considered to relieve stress (Table , quotes a-b). Several participants felt this new domain of responsibility for clinical educators was more in line with an ideal role of teaching faculty (Table , quote a). On the other hand, some participants felt that something was “lost” from grades no longer being assigned by the supervising resident and attending who spend the most face-to-face time with students, especially when it comes to students who are performing at the ends of the spectrum (Table , quote c). They wanted a chance to provide input on the final grade especially for “the students [who] should clearly get one grade or another,” such as for the outstanding or struggling students, but agreed that it is “nice to not necessarily have that responsibility” of assigning final grades for students whose performance may be borderline or unclear. Hospitalists, who most frequently assess learners on clerkship rotations, were most likely to identify a loss of voice in the final grading process.

Challenges of assessment tools
With the transition to grading committees, participants universally felt increased responsibility to provide detailed information of students’ performance, but they frequently cited barriers to providing high quality data via the 2020–2021 assessment forms. Interviewees generally agreed that faculty and resident time limitations were a major barrier to providing superior feedback (Table , quote a); however, participants harbored differing opinions on the relative technical challenges of the WUSM assessment tool, which incorporates both checklist responses and narrative feedback.
For some, the checklist responses addressing student performance across key domains suffered from a lack of “nuance.” Participants worried that outstanding students whose clinical performance exceeds expectations could appear the same on paper as students with consistent but average performance (Table , quote b). Conversely, many assessors struggled with communicating their assessment of students who simultaneously fulfilled performance checkboxes but still fell short of expectations for commendable performance (Table , quote c). They felt that an overall “gestalt” of a student was difficult to communicate using check boxes. For others, narrative assessments were overwhelming and repetitive, leading to assessment fatigue (Table , quote d-e). Participants recognized that the evaluation forms had multiple options to address these preferences (i.e. free text boxes to add nuance/context) but these were not uniformly acceptable or were too cumbersome for users (Table , quote f). Instead, some assessors indicated that they “would rather just talk to a human being,” such as the Clerkship Director, to provide narrative assessment in place of writing. Overall, participants believed they would benefit from training to improve the quality of their assessments to optimize the accurate communication of student performance to grading committees.

Discomfort with grading committee transition (and the need for training)
The use of grading committees created new sources of discomfort for assessors and concern that it would lead to new sources of anxiety for students. Many participants noted apprehension regarding unfamiliarity with the new grading committee process (Table , quote a). They pointed out several areas of uncertainty including how committees utilize assessment forms to synthesize final grades, how one assessor’s evaluations are weighed relative to another, and the relative contribution of standardized exams and performance evaluations.
There was a perceived lack of transparency and clarity in the grading committee process (Table , quote c). As one interviewee stated, they felt “in the dark” about how grading committee uses feedback. Several participants also “[perceived] some…increased anxiety” among medical students with the introduction of the grading committee. Students may perceive the grading process as “impersonal” without transparency and be apprehensive about what data is synthesized into a final grade, with what degree of importance. Participants generally had minimal or no prior formalized training in assessment and identified this as an additional area of discomfort (Table , quote a). While there was an assortment of topics that participants felt could be covered for faculty and resident development, they believed mandatory training on general topics such as assessment and feedback would be “less likely … to get a lot of buy-in from people (faculty and residents)” due to scheduling restraints and variable interest. Many participants, however, prioritized a need for “practical training” – specifically, increased guidance on how to complete high quality performance evaluations in order to communicate a comprehensive view of student performance to the receiving grading committee (Table , quote b). All focus groups agreed that training would ideally be delivered in a timely manner in close proximity to resident or faculty time on service. There was not a uniform opinion on the best format to disseminate training, but some frequent suggestions included a module describing how the committee interprets evaluation forms to come to a grading decision, a tutorial walking through the assessment form with a mock student, or an instructional video with “frequently asked questions” about the assessment form. Of an estimated 230 assessors, twenty-three volunteers participated in our study across four focus groups (Table ). 
At the resident physician level, both PGY-2 and PGY-3 residents were represented, as PGY-1 residents do not assess WUSM students. At the faculty level, participants ranged in seniority from Instructor to Professor. Faculty represented Internal Medicine subspecialities, Primary Care, and Hospitalist Medicine. Participants’ primary teaching environments included inpatient Medicine, inpatient Cardiology, and outpatient Primary Care or ambulatory subspecialty clinics. Using thematic analysis, four themes emerged – grading fairness, change in responsibility of assessors, challenges of assessment tools, and discomfort with the grading committee transition (Fig. ). Assessors view the switch from individual graders to a grading committee as theoretically beneficial to students due to increased grading fairness and beneficial to faculty due to decreased pressure. Despite these benefits, assessors report ongoing challenges in utilization of assessment tools and discomfort with the grading transition due to an incomplete understanding of the process. Grading fairness Participants universally agreed that switching from individual graders to a grading committee is beneficial, leading to a potentially fairer grading process. They cited that committee-assigned grading is “more standardized” and “objective” due to perceived decreased variability among graders and decreased impact of grader bias (Table , quote a). Some participants expressed concern that they may not have “enough exposure” and face time with students, especially on outpatient rotations where schedules may limit time observing and teaching students. They felt relieved that the grading committee takes into account perspectives from multiple assessors to provide a more complete picture of student performance (Table , quote a). 
Participants considered that the grading committee values how students’ skills “are growing over the course of” the clerkship, also contributing to a more comprehensive picture of student performance. Only one participant specifically cited that the grading committee evaluates students “blindly” after de-identifying assessment data, while most participants did not cite this factor. A subtheme that emerged from discussions of grading objectivity was grade inflation. Multiple interviewees discussed an institutional history of grade inflation, citing pressure from both students and the institution to provide favorable grades, as well as personal tendencies to “give students the benefit of the doubt” (Table , quote b, c). Participants expressed conflicting opinions of whether grading committees have the potential to relieve grade inflation. Several assessors noted that switching to a grading committee reduced pressure on faculty and residents to provide exaggeratedly positive assessments (Table , quote c) and mitigated the need to build strong, defensive arguments for issuing honest grades due to the more standardized grading process (Table , quote d). Other participants, however, cautioned that there could still be persistent pressure to provide overly positive feedback due to fear of being considered “overly mean” (Table , quote e). Change in responsibility The majority of participants reported a change of responsibility after WUSM transitioned to grading committees. Assessors commented that they “feel less involved with the grading aspect” because they are no longer recommending a grade, but instead “are more involved in…providing an assessment” because they are tasked with describing student behaviors relative to core objectives, while the grading committee interprets their descriptions to generate a grade. 
This was generally a welcomed change, resulting in more time to focus on student-centered feedback and less “pressure” put on clinical educators to give a final grade, a process that was almost universally considered to relieve stress (Table , quotes a-b). Several participants felt this new domain of responsibility for clinical educators was more in line with an ideal role of teaching faculty (Table , quote a). On the other hand, some participants felt that something was “lost” from grades no longer being assigned by the supervising resident and attending who spend the most face-to-face time with students, especially when it comes to students who are performing at the ends of the spectrum (Table , quote c). They wanted a chance to provide input on the final grade especially for “the students [who] should clearly get one grade or another,” such as for the outstanding or struggling students, but agreed that it is “nice to not necessarily have that responsibility” of assigning final grades for students whose performance may be borderline or unclear. Hospitalists, who most frequently assess learners on clerkship rotations, were most likely to identify a loss of voice in the final grading process. Challenges of assessment tools With the transition to grading committees, participants universally felt increased responsibility to provide detailed information of students’ performance, but they frequently cited barriers to providing high quality data via the 2020–2021 assessment forms. Interviewees generally agreed that faculty and resident time limitations were a major barrier to providing superior feedback (Table , quote a); however, participants harbored differing opinions on the relative technical challenges of the WUSM assessment tool, which incorporates both checklist responses and narrative feedback. 
For some, the checklist responses addressing student performance across key domains suffered from a lack of “nuance.” Participants worried that outstanding students whose clinical performance exceeds expectations could appear the same on paper as students with consistent but average performance (Table , quote b). Conversely, many assessors struggled with communicating their assessment of students who simultaneously fulfilled performance checkboxes but still fell short of expectations for commendable performance (Table , quote c). They felt that an overall “gestalt” of a student was difficult to communicate using check boxes. For others, narrative assessments were overwhelming and repetitive, leading to assessment fatigue (Table , quote d-e). Participants recognized that the evaluation forms had multiple options to address these preferences (i.e. free text boxes to add nuance/context) but these were not uniformly acceptable or were too cumbersome for users (Table , quote f). Instead, some assessors indicated that they “would rather just talk to a human being,” such as the Clerkship Director, to provide narrative assessment in place of writing. Overall, participants believed they would benefit from training to improve the quality of their assessments to optimize the accurate communication of student performance to grading committees. Discomfort with grading committee transition (and the need for training) The use of grading committees created new sources of discomfort for assessors and concern it would lead to new sources of anxiety for students. Many participants noted apprehension regarding unfamiliarity with the new grading committee process (Table , quote a). They pointed out several areas of uncertainty including how committees utilize assessment forms to synthesize final grades, how one assessor’s evaluations are weighed relative to another, and the relative contribution of standardized exams and performance evaluations. 
There was a perceived lack of transparency and clarity in the grading committee process (Table , quote c). As one interviewee stated, they felt “in the dark” about how the grading committee uses feedback. Several participants also “[perceived] some…increased anxiety” among medical students with the introduction of the grading committee. Students may perceive the grading process as “impersonal” without transparency and be apprehensive about what data is synthesized into a final grade, with what degree of importance. Participants generally had minimal or no prior formalized training in assessment and identified this as an additional area of discomfort (Table , quote a). While there was an assortment of topics that participants felt could be covered for faculty and resident development, they believed mandatory training on general topics such as assessment and feedback would be “less likely … to get a lot of buy-in from people (faculty and residents)” due to scheduling constraints and variable interest. Many participants, however, prioritized a need for “practical training” – specifically, increased guidance on how to complete high quality performance evaluations in order to communicate a comprehensive view of student performance to the receiving grading committee (Table , quote b). All focus groups agreed that training would ideally be delivered in a timely manner in close proximity to resident or faculty time on service. There was not a uniform opinion on the best format to disseminate training, but some frequent suggestions included a module describing how the committee interprets evaluation forms to come to a grading decision, a tutorial walking through the assessment form with a mock student, or an instructional video with “frequently asked questions” about the assessment form. Participants universally agreed that switching from individual graders to a grading committee is beneficial, leading to a potentially fairer grading process.
They cited that committee-assigned grading is “more standardized” and “objective” due to perceived decreased variability among graders and decreased impact of grader bias (Table , quote a). Some participants expressed concern that they may not have “enough exposure” and face time with students, especially on outpatient rotations where schedules may limit time observing and teaching students. They felt relieved that the grading committee takes into account perspectives from multiple assessors to provide a more complete picture of student performance (Table , quote a). Participants considered that the grading committee values how students’ skills “are growing over the course of” the clerkship, also contributing to a more comprehensive picture of student performance. Only one participant specifically cited that the grading committee evaluates students “blindly” after de-identifying assessment data, while most participants did not cite this factor. A subtheme that emerged from discussions of grading objectivity was grade inflation. Multiple interviewees discussed an institutional history of grade inflation, citing pressure from both students and the institution to provide favorable grades, as well as personal tendencies to “give students the benefit of the doubt” (Table , quote b, c). Participants expressed conflicting opinions of whether grading committees have the potential to relieve grade inflation. Several assessors noted that switching to a grading committee reduced pressure on faculty and residents to provide exaggeratedly positive assessments (Table , quote c) and mitigated the need to build strong, defensive arguments for issuing honest grades due to the more standardized grading process (Table , quote d). Other participants, however, cautioned that there could still be persistent pressure to provide overly positive feedback due to fear of being considered “overly mean” (Table , quote e). 
The majority of participants reported a change of responsibility after WUSM transitioned to grading committees. Assessors commented that they “feel less involved with the grading aspect” because they are no longer recommending a grade, but instead “are more involved in…providing an assessment” because they are tasked with describing student behaviors relative to core objectives, while the grading committee interprets their descriptions to generate a grade.
The fairness of medical student clerkship grades has been questioned due to the impact of bias, subjectivity, and limited interrater reliability. Grading committees and group decision-making are thought to promote grading consistency, especially when student data is reviewed in a de-identified manner. As evidenced by a recent survey of Clerkship Directors in Internal Medicine, many institutions are adopting grading committees as one strategy to improve grading equity. Our study explores the opinions of faculty and resident assessors in the first year after the transition to grading committees in the WUSM Internal Medicine clerkship. As we move toward grading committees, understanding assessors’ opinions about the process can facilitate implementation at other institutions, helping medical education leaders identify key stakeholders, lean into points of agreement, and prepare for points of dissent. In this study, assessors unanimously agreed that group decision-making should improve standardization and help minimize the impact of bias and inter-assessor variability. Grading committees, however, are only one component of addressing issues of bias. Assessors can still write biased narratives, feel pressured to inflate evaluations, or demonstrate variable commitment to the submission of descriptive evaluations. Furthermore, grading committee members are still subject to inequities in the integration and prioritization of assessment data.
This highlights the ongoing importance of implicit bias training for assessors and grading committee members, a practice that has not yet been universally adopted among medical schools. In pursuit of high quality assessments, there has also been a shift from personal commentary to behavior-based assessments in the form of clinical competencies, which are often assessed on a scale; however, rating scales are generally perceived to be poor motivators for student learning. As a result, narrative comments remain a critical element of student evaluations, both to facilitate student development and to provide holistic context for performance. Narrative feedback, however, can be flawed and prone to stereotyped language. Participants in our study highlighted the challenges of using assessment tools, identifying difficulty with accurate descriptions of performance via both narrative and multiple selection items. Some participants struggled to provide meaningful narrative feedback while others struggled to interpret the clinical competencies addressed on the rating scales. A key take-away from our study is the importance of providing a diversity of mechanisms for assessors to share their observations, allowing assessors to utilize their strengths and preferences to provide the most accurate assessment data possible. Participants wanted to raise the quality of assessment data they delivered to the grading committee. They believed that practical faculty/resident development sessions specifically geared at assessment could help achieve that goal, especially since most of our participants had no pre-existing formalized training in assessment methods. Notably, these requests for faculty development followed a clerkship-led effort to introduce assessors to the new grading committee role and how assessment forms would be utilized by the committee.
These findings underscore the complexity of assessment strategies and reinforce the need for multi-modal, repeated faculty development initiatives at our institution and others. When WUSM transitioned to the Gateway Curriculum in 2022, lessons learned from our study were incorporated into adapted assessment practices within the new curriculum. First, based on the feedback from this initiative as well as the focus on competency-based education, Gateway assessment forms have been streamlined, now comprised of 2–4 Likert scale questions and two boxes for narrative comments. The Gateway Internal Medicine clerkship addressed the challenge of narrative assessments by inviting assessors from inpatient rotations to a teleconference where clerkship leadership guide assessors through semi-structured interviews to provide assessment commentary. In exchange, these assessors are not required to submit written narrative comments. Second, in response to the viewpoints elucidated by this analysis, the Internal Medicine clerkship ramped up the development of frontline assessors’ assessment skills. It uses a multi-faceted approach to assessor development, incorporating didactic sessions, office hours, tip sheets, online modules, and personalized feedback. This approach provides options for assessors to learn the skills needed to assess students in the form they most prefer, and it is delivered iteratively throughout the year. The strengths of our investigation reside within our methods. First, we encouraged honest responses from participants because peers, instead of members of clerkship leadership, conducted the semi-structured interviews. Second, we recruited a diversity of participants from residency, faculty, inpatient, and outpatient specialties. Lastly, our research team reinforced this diversity of perspectives, incorporating the perspectives of medical educators from residency training, faculty, and clerkship leadership into data analysis. Our investigation has limitations. 
Our focus group participation rate was approximately 10% of total frontline assessors, although our estimate likely overestimates the total number of individual assessors, thereby underestimating our participation rate. While we recruited a diversity of study participants, this relatively low participation rate may limit the generalizability of our results. Additionally, our focus group participants may represent a subset of assessors who have increased interest in medical education compared to the general population of assessors at a single institution, WUSM. We did not investigate if students perceive increased fairness after transitioning to grading committees, nor did we include the perspective of grading committee members with respect to the quality or content of assessments. Therefore, we present a single viewpoint regarding the benefits and shortcomings of grading committees. This study demonstrates that grading committees change the roles and responsibilities of frontline assessors, relieving the grading burden but increasing the emphasis on high quality written assessment, which is a persistent challenge. Faculty and resident development sessions focused on student assessment and constructive narrative feedback may better prepare our assessors for their roles. To this end, there is evidence that rater training can improve faculty confidence in clinical evaluation; however, the impact on grading reliability is less clear. More work needs to be done to determine if faculty development improves assessment quality or accuracy. Future investigation of grading outcomes after implementation of grading committees at WUSM is also needed to determine if this change enhanced equity. Below is the link to the electronic supplementary material. Supplementary Material 1: Additional File 1. Interviewer Guide for Focus Group. File contains the interviewers’ guide for focus groups, including consent process and interview questions |
Two-photon all-optical neurophysiology for the dissection of larval zebrafish brain functional and effective connectivity | d46dfd7a-58c0-4eb1-9cb8-5deeaba90da9 | 11452506 | Physiology[mh] | Understanding the functional connectivity of intricate networks within the brain is a fundamental goal toward unraveling the complexities of neural processes. This longtime focal point in neuroscience requires methodologies to trigger and capture neuronal activity in an intact organism. Critical insights into the complex interplay among large populations of neurons have been provided by electroencephalography and functional magnetic resonance imaging – . These gold standard methods, however, do provide a noninvasive means to detect neuronal activity, but with limited spatial (the former) and temporal resolution (the latter), and lack equally noninvasive possibilities to precisely elicit and control it. Therefore, it is evident that deciphering how individual neurons communicate to shape functional neural circuits on a whole-organ scale demands further technological advances. Over the last few decades, with the advent of optogenetics and the widespread adoption of genetically encoded fluorescent indicators , all-optical methods have gained traction for their ability to simultaneously monitor and manipulate the activity of multiple neurons within the intact brain – . In this framework, the ever-increasing use of the tiny and translucent zebrafish larva as a reliable animal model recapitulating manifold features of vertebrate species physiology , has provided momentum for the development and enhancement of optical technologies aimed at imaging and controlling neuronal activity with light at high spatio-temporal resolution – . On the imaging side, previous high-resolution all-optical investigations of zebrafish have made use of two-photon (2P) point scanning methods – or, more rarely, of one-photon (1P) excitation light-sheet fluorescence microscopy (LSFM) . 
Compared to point scanning approaches, LSFM, allowing parallelization of the detection process within each frame, enables concurrent high spatio-temporal resolution and extensive volumetric imaging. However, the use of visible excitation in 1P LSFM represents an undesired source of strong visual stimulation for the larva, often requiring tailored excitation strategies at least to prevent direct illumination of the eyes. On the photostimulation side, most advanced all-optical setups typically adopt parallel illumination approaches, making use of spatial light modulators (SLM) or digital micromirror devices to generate multiple simultaneous holographic spots of excitation. In computer-generated holography, the input laser power is subdivided among the various spots, resulting in increasing energies released on the specimen as the number of effective targets rises and, consequently, in an increasing probability of photodamage. Conversely, scan-based sequential stimulation allows a fraction of the power needed by parallel approaches to be deposited on the sample at any time, regardless of the number of targets. As a drawback, however, scan-based methods typically employ mechanical moving parts that constrain the stimulation sequence speed and thus the maximum temporal resolution achievable. An exception is represented by acousto-optic deflectors (AODs), which are not affected by mechanical inertia and thus enable discontinuous three-dimensional trajectories with a constant repositioning time. In particular, featuring an ultrashort access time (μs range), AODs represent the scanning technology that comes closest to parallel illumination performance. Indeed, AODs enable quasi-simultaneous three-dimensional targeting of multiple spots while keeping the overall delivered energy low. However, despite their extensive use for fast 3D imaging, these devices have rarely been employed for photostimulation so far.
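The power-budget argument above can be made concrete with a toy calculation (illustrative only; the power figures below are invented, not taken from the paper):

```python
# Parallel holography splits a fixed laser budget across N simultaneous
# spots, so the whole budget lands on the specimen at once; sequential
# AOD targeting visits one spot at a time, so the instantaneous power
# on the sample stays at the single-spot level regardless of N.

def parallel_power_per_spot(total_power_mw, n_targets):
    """Per-spot power when one beam is split into n holographic spots."""
    return total_power_mw / n_targets

def sequential_power_on_sample(spot_power_mw, n_targets):
    """Instantaneous power on the sample for scan-based targeting:
    independent of how many targets are visited in sequence."""
    return spot_power_mw

# With a (hypothetical) 300 mW budget split over 10 holographic spots,
# each spot gets 30 mW while the specimen receives all 300 mW at once.
assert parallel_power_per_spot(300, 10) == 30
# A sequential scan at 30 mW per spot deposits only 30 mW at any instant,
# whether it visits 10 targets or 1000.
assert sequential_power_on_sample(30, 10) == sequential_power_on_sample(30, 1000) == 30
```

The trade-off, as the text notes, is that sequential methods spread the exposure in time, which is where the microsecond access time of AODs becomes decisive.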
In this work, we present an all-optical setup consisting of a light-sheet microscope and a light-targeting system equipped with AODs, both employing nonlinear excitation. The light-sheet microscope enables high spatio-temporal resolution volumetric imaging of the larval zebrafish brain, while the light-targeting system is employed to perform concurrent three-dimensional optogenetic stimulation. Using a double transgenic line pan-neuronally expressing both the green fluorescent calcium indicator GCaMP6s and the red-shifted light-gated cation channel ReaChR, we demonstrate a crosstalk-free experimental approach for all-optical investigation of brain circuitries. Leveraging two-photon excitation and the inertia-free light targeting capabilities of AODs, we validated the system functionality by reconstructing the efferent functional and effective connectivity of the left habenula, a cerebral nucleus mainly composed of excitatory neurons, linking forebrain and midbrain structures.

A crosstalk-free approach for two-photon all-optical investigations in zebrafish larvae

To explore brain functional connectivity in zebrafish larvae, we devised an integrated all-optical 2P system capable of simultaneously recording and stimulating neuronal activity. The setup (Fig. and Supplementary Fig. ) consists of a light-sheet fluorescence microscope and a light-targeting system, specifically designed for fast whole-brain calcium imaging and 3D optogenetic stimulation, respectively. Both optical paths employ pulsed near-infrared (NIR) laser sources for 2P excitation. The 2P LSFM module, employing digitally scanned mode, double-sided illumination, control of excitation light polarization, and remote focusing of the detection objective, is capable of recording the entire larval brain (400 × 800 × 200 μm³) at volumetric rates up to 5 Hz (Supplementary Movie and Supplementary Fig. ).
On the other hand, the light-targeting system incorporates two pairs of acousto-optic deflectors to move the excitation focus to arbitrary locations inside a 100 × 100 × 100 μm³ volume, guaranteeing a constant repositioning time (4 μs) independently of the relative distance between sequentially illuminated points, and equal delivered energy independently of the number of targets. To perform simultaneous recording and stimulation of neuronal activity, we employed the pan-neuronal Tg(elavl3:H2B-GCaMP6; elavl3:ReaChR-TagRFP) zebrafish line (Fig. , Supplementary Movie ). Larvae of this double transgenic line express the green fluorescent calcium indicator GCaMP6s inside neuronal nuclei and the red-shifted light-gated cation channel ReaChR (as a fusion protein with the red fluorescent protein TagRFP) on neuronal membranes (Fig. ). We initially investigated the possible presence of crosstalk activation of ReaChR channels due to the excitation wavelength used for functional imaging. To this end, we employed two complementary approaches. First, light-sheet imaging of both double transgenic larvae (ReaChR+) and GCaMP6s-expressing larvae (ReaChR−, lacking the light-gated channel) was performed for 5 min (volumetric rate: 2.5 Hz, λex: 920 nm, laser power at the sample: 60 mW). To evaluate the level of neuronal activity, we computed the standard deviation (SD) over time for each voxel belonging to the brain (image processing voxel size = 4.4 × 4.4 × 5 μm³, see Data analysis for details). We adopted SD as a metric for neuronal activity since we found it more sensitive than the number of calcium peaks per minute in discriminating between different conditions, and equally sensitive to the average peak amplitude, yet not necessitating the setting of arbitrary thresholds (Supplementary Fig. ). No major differences could be observed between the two groups in the average SD distributions computed over a 5-minute exposure to the imaging laser (Fig. ).
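As an illustration of the per-voxel SD metric described above, a minimal stdlib sketch (our own toy example, not the authors' analysis code; trace values and sampling are invented):

```python
import statistics

def voxel_sd_activity(trace):
    """Temporal standard deviation of one voxel's fluorescence trace,
    used as a threshold-free proxy for neuronal activity."""
    return statistics.pstdev(trace)

# Toy traces, 750 samples ~ 5 min at 2.5 Hz:
# a quiet voxel with small fluctuations vs one with calcium-like transients.
quiet = [0.01 * ((-1) ** i) for i in range(750)]
active = [1.0 if i % 40 == 0 else 0.0 for i in range(750)]

# The transient-bearing voxel stands out without any peak-detection threshold.
assert voxel_sd_activity(active) > 5 * voxel_sd_activity(quiet)
```

In the actual pipeline this would be applied voxel-wise over a 4D recording, but the per-trace computation is the whole idea: high-amplitude transients inflate the temporal SD, while baseline noise keeps it low.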
Indeed, the resulting imaging crosstalk index (calculated as the Hellinger distance between the two average distributions, see Data analysis for details) was extremely low (3.9% ± 4.5%; Fig. ). However, since crosstalk activation of light-gated channels by a spurious wavelength is typically power-dependent, we then investigated whether higher powers of the laser used for imaging could induce a significant effect on ReaChR+ larvae. Figure shows the average SD distributions obtained from ReaChR+ and ReaChR− larvae at imaging powers ranging from 40 to 100 mW. Although higher laser powers produced a shift of the distributions towards higher SD values, this shift equally affected the neuronal activity of both ReaChR+ and ReaChR− larvae (see also Supplementary Fig. ). The differences between the median values of the SD distributions of the two groups (ReaChR+ and ReaChR−) at the same imaging power were not statistically significant (Fig. , right) and, indeed, the imaging crosstalk index remained essentially constant in the power range tested (Supplementary Fig. ). We also investigated imaging-related crosstalk from a behavioral standpoint. We performed high-speed tail tracking of head-restrained ReaChR− and ReaChR+ larvae in the absence (OFF) and presence (ON) of whole-brain light-sheet imaging (Fig. and Supplementary Movies and ). As shown in Fig. , compared to the OFF period, during 920 nm laser exposure (ON) both strains showed a slight but not significant increase in the number of tail beats per minute, suggesting that the applied power (60 mW) was well tolerated by the animals. Moreover, the relative number of tail beats during imaging ON was not significantly different between ReaChR+ and ReaChR− larvae (Fig. ), providing additional evidence of the absence of spurious excitation in ReaChR+ larvae by the 920 nm laser used for imaging.
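The crosstalk index above is the Hellinger distance between two averaged SD distributions. A minimal sketch of that metric, assuming the distributions are histograms over identical bins (function name and example histograms are ours):

```python
from math import sqrt

def hellinger_distance(p, q):
    """Hellinger distance between two discrete distributions over the same
    bins: 0 for identical distributions, 1 for fully disjoint ones."""
    ps, qs = sum(p), sum(q)  # normalize raw counts to probabilities
    return sqrt(0.5 * sum((sqrt(a / ps) - sqrt(b / qs)) ** 2
                          for a, b in zip(p, q)))

# Identical SD histograms give a crosstalk index of 0...
a = [1, 4, 4, 1]
assert hellinger_distance(a, a) == 0.0
# ...while a histogram shifted toward higher SD values gives a larger index.
b = [0, 1, 4, 5]
assert 0.0 < hellinger_distance(a, b) < 1.0
```

Being bounded in [0, 1], the distance reads naturally as a percentage, which is how the crosstalk and activation indices are reported in the text.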
After demonstrating the absence of cross-talk activation of ReaChR channels upon 2P light-sheet scanning, we investigated the ability of our AOD-based photostimulation system to effectively induce optogenetic activation of targeted neurons. For this purpose, we selected a stimulation wavelength (1064 nm) that is red-shifted relative to the opsin’s 2P excitation peak (975 nm). By doing so, we increased the separation between the wavelength used for optogenetic stimulation and the 2P excitation peak of GCaMP6s (920 nm), thus further reducing the potential for stimulation-induced artifacts. We thus stimulated ReaChR+ and ReaChR− larvae at 1064 nm (laser power at the sample: 30 mW, stimulation volume: 30 × 30 × 30 μm³) while simultaneously recording whole-brain neuronal activity via light-sheet imaging. Larvae expressing the opsin showed strong and consistent calcium transients evoked at the stimulation site (Fig. , inset). Conversely, stimulating ReaChR− larvae did not result in any detectable response (Fig. , inset). We quantified the effect of the optogenetic stimulation by computing the distributions of SD values of the voxels inside the stimulation site for ReaChR+ and ReaChR− larvae (Fig. , left). Stimulation at 1064 nm induced statistically significant optogenetic activation of opsin-expressing neurons in ReaChR+ larvae (ReaChR− = 0.0277 ± 0.0017, ReaChR+ = 0.0608 ± 0.0077, mean ± sem; Fig. , right). Despite the small stimulation volume with respect to the entire brain size, the effect of the photostimulation was also noticeable in the whole-brain SD distribution, where ReaChR+ larvae showed a peak slightly shifted toward higher SD values (Fig. , left), which produced a significantly greater average SD (ReaChR− = 0.0201 ± 0.0007, ReaChR+ = 0.0211 ± 0.0006, mean ± sem; Fig. , right).
This appreciable difference was due to the high-amplitude calcium transients evoked by the stimulation and by the activation of the neuronal population synaptically downstream of the stimulation site. Figure shows the optogenetic activation indices (calculated as the Hellinger distance between the two average distributions, see Data analysis for details) for the stimulation site and the entire brain (stimulation site = 68.9% ± 0.3%, brain = 8.6% ± 2.1%). To rule out any possible spurious activation effect not related to the optogenetic excitation of ReaChR channels (e.g., sensory perception of the laser stimulus), we also compared the SD distributions of ReaChR− larvae subjected to imaging only (ReaChR−i) or to simultaneous imaging and stimulation (ReaChR−i+s). The analysis highlighted no statistically significant effects of the photostimulation in the absence of opsin expression at either the stimulus site (Supplementary Fig. ) or at a brain-wide level (Supplementary Fig. ).

Characterization of calcium transients evoked by 3D optogenetic stimulation

After assessing the absence of opsin crosstalk activation upon light-sheet imaging and verifying the ability of our system to consistently induce optogenetic activation of ReaChR+ neurons, we characterized the neuronal responses to identify optimal stimulation parameters. We decided to target the stimulation at an easily recognizable cerebral nucleus mainly composed of excitatory neurons. Neurons having their soma inside the habenulae express vesicular glutamate transporter 2 (VGLUT2, also known as SLC17A6), representing a coherent group of excitatory glutamatergic neurons. We therefore directed the stimulation onto the left habenula, an anatomically segregated nucleus that is part of the dorsal-diencephalic conduction system. We adopted a stimulation volume of 50 × 50 × 50 μm³, sufficient to cover the entire habenula (Fig. ).
This volume was populated with 6250 points distributed across 10 z-planes (z step: 5 μm). With a point dwell-time of 20 μs, a complete cycle over all the points in the volume took only 125 ms. We first characterized the calcium transients as a function of the stimulation duration (scan time, Fig. ) in the range of 125 to 625 ms (1–5 iterations over the volume). Figure shows the amplitude of the calcium peaks as a function of the scan time. Increasing scan durations produced a progressive increase in peak amplitude until a plateau was reached between 4 and 5 volume cycles (scan time 500–625 ms). From a kinetic point of view, increasing scan durations led to a significant decrease in the rise time of calcium transients (Fig. ). Additionally, the decay time of the calcium transients progressively increased with increasing scan time (Fig. ). We also characterized the neuronal response as a function of the 1064 nm excitation power (ranging from 10 to 40 mW, Fig. ). The amplitude of the calcium transients showed a strong linear dependence on the stimulation power (Fig. , R²: 0.89). While the rise time did not seem to be affected by the laser intensity (Fig. ), the decay time showed a strong linear proportionality (Fig. , R²: 0.82). The duration of calcium transients (Supplementary Fig. ), instead, increased with increasing stimulation power but was not significantly affected by scan time. Given the small variation in rise time, in both cases the overall duration of the calcium transient was largely determined by the decay time trend.

Whole-brain functional circuitry of the left habenular nucleus

After this initial technical validation, we employed our all-optical setup to identify cerebral regions functionally linked to the left habenular nucleus. To this end, we designed the following stimulation protocol (Fig. ).
For each zebrafish larva, we performed six trials consisting of 5 optogenetic stimuli (interstimulus interval: 16 s) during simultaneous whole-brain light-sheet imaging. Based on the characterization performed, we adopted a stimulus duration of 500 ms (4 complete consecutive iterations over the 50 × 50 × 50 μm 3 volume, point density: 0.5 point/μm) and a laser power of 30 mW to maximize the neuronal response while keeping the laser intensity low (Supplementary Movie ). First, we evaluated the brain voxel activation probability in response to the optogenetic stimulation of the left habenula (LHb). Figure shows different projections of the whole-brain average activation probability map (Supplementary Movies and ). The LHb, the site of stimulation, predictably showed the highest activation probability values. In addition to the LHb, an unpaired nucleus located at the ventral midline of the posterior midbrain showed an increased activation probability with respect to the surrounding tissue. Then, we segmented the entire larval brain into ten different anatomical regions according to structural boundaries (Fig. , left). By extracting the average activation probability from each region (Fig. , right), we found that the deep midbrain district corresponded to the interpeduncular nucleus (IPN). The IPN is a renowned integrative center and relay station within the limbic system that receives habenular afferences traveling through the fasciculus retroflexus – . Figure shows the average normalized activation probability distributions for voxels inside the LHb, IPN and right habenula (RHb, the region with the highest activation probability after LHb and IPN). LHb and IPN neurons exhibited activation probabilities as high as 100% and 51%, respectively. Notably, despite LHb presenting higher activation probabilities than IPN across larvae, higher LHb probabilities did not necessarily correspond to higher IPN probabilities (Fig. ). 
Figure shows representative mean Δ F / F signals obtained from the LHb (blue) and IPN (yellow) regions during a stimulation trial. The LHb consistently responded to the photostimulation with high-amplitude calcium transients. The IPN showed lower amplitude activations (~1/10 of the LHb), yet reproducibly following the pace induced by LHb stimulation (as also visible in Supplementary Movie , yellow arrowhead). The coherence between these time traces was confirmed by their cross-wavelet power spectrum, showing highest density around the optogenetic trigger rate (1/16 Hz, Fig. ; see also Supplementary Fig. ). As a comparison, Supplementary Fig. shows the cross-wavelet power spectral density of the LHb and RHb activities, where null to low coupling levels emerge. We then examined whole-brain functional connectivity during optogenetic stimulation. To this end, we first extracted the neuronal activity from previously segmented brain regions. Figure shows, as an example, a heatmap of neuronal activity over time during a single stimulation trial. The LHb and IPN were apparently the only two regions following the photostimulation trigger (dark red vertical bars). This result is confirmed and generalized by the chord diagram presented in Fig. . This chart presents the average all-against-all correlation between the neuronal activity of different brain regions. The LHb and IPN were the two anatomical districts that showed the strongest functional connectivity during stimulation (Pearson’s correlation coefficient = 0.605 ± 0.079, mean ± sem). To explore the causal relationships among observed interactions between brain regions, what is known as effective connectivity , we analyzed the Granger causality (GC) of their spatially averaged activities . By examining the added predictability of one time series based on the past values of another, GC analysis allows to draw inferences about directional cause-and-effect relationships between brain activities . In Fig. 
, the average strength of the directed interaction among brain regions is depicted using the F statistic. The results from GC analysis showed that the activity recorded in the IPN have a strong causal link only with the activity triggered in the LHb (88.83% ± 8.24% of trials are significant for the LHb→IPN direction while only 2.78% ± 2.78% of them are significant for the opposite direction IPN→LHb, mean ± sem). Furthermore, Fig. illustrates the significant directional causality links between brain regions. Notably, compared to the naturally occurring interacting pairs, the optogenetically revealed LHb-IPN pair showed increased consistency among trials in the significance of their interaction direction (arrow width; percentage of significant trial for the directed interaction: Th-HB, 61.11% ± 9.29%; C-HB, 61.11% ± 10.24%; C-Th, 36.11% ± 5.12%; mean ± sem). On the other hand, the strength of the causal link (represented by the F statistic and graphically depicted by arrow color) for the stimulated LHb-IPN pair was comparable to that of spontaneously occurring pairs ( F value: LHb-IPN, 12.16 ± 1.29; PT-HB, 12.10 ± 1.48; Th-PT, 11.40 ± 1.64; PT-HB, 13.98 ± 2.33; mean ± sem). Interestingly, among the causal connections highlighted by GC analysis, causality links (albeit of a lesser extent) emerged also between LHb-RHb and T-RHb pairs. After employing GC analysis to assess the direction of the causality links, we employed partial correlation analysis to assess the directness of the causal connection observed. Partial correlation analysis represents the remaining correlation between two regions after accounting for the influence of all other regions. Results show with a probability of 88.9 ± 7.0% (mean ± sem) that the LHb-IPN link was direct. 
In contrast, the LHb-RHb pair produced an opposite result (directness probability 30.4 ± 10.9%, mean ± sem), suggesting an indirect connection, while T-RHb is associated with a more controversial result (directness probability 41.6 ± 17.1%, mean ± sem). Next, we investigated the seed-based functional connectivity of the left habenular nucleus. To this end, we computed the Pearson’s correlation between the average neuronal activity in the LHb (seed) and the activity in each brain voxel. Figure shows different projections of the average functional connectivity map of the LHb (Supplementary Movie ). In addition to LHb neurons which exhibited an expected high self-correlation, IPN neurons showed visible higher functional connectivity with respect to other brain regions. This result is confirmed by the analysis of the average correlation coefficient of the different regions (Fig. ), where the IPN was the only region presenting a statistically significant functional connectivity with the LHb. Figure shows the average normalized distributions of correlation coefficients computed from voxels inside the LHb, IPN and RHb. With respect to RHb, which had a distribution basically centered at 0 with a short tail towards negative correlation values, neurons in the LHb and IPN showed functional connectivity values as high as 100% and 65%, respectively. In order to visually isolate the neuronal circuit underlying LHb stimulation, we set a threshold on the correlation coefficient. Based on the results shown in Fig. , we chose a threshold of 0.12 as the highest value separating regions showing significantly higher correlation with the seed activity (namely, LHb and IPN). Figure shows the binarized functional connectivity map of the left habenular nucleus in larval zebrafish (Supplementary Movie ). To explore brain functional connectivity in zebrafish larvae, we devised an integrated all-optical 2P system capable of simultaneously recording and stimulating neuronal activity. 
The setup (Fig. and Supplementary Fig. ) consists of a light-sheet fluorescence microscope and a light-targeting system, specifically designed for fast whole-brain calcium imaging and 3D optogenetic stimulation, respectively. Both optical paths employ pulsed near-infrared (NIR) laser sources for 2P excitation . The 2P LSFM module, employing a digitally scanned mode, double-sided illumination, control of excitation light polarization, and remote focusing of the detection objective, is capable of recording the entire larval brain (400 × 800 × 200 μm 3 ) at volumetric rates up to 5 Hz (Supplementary Movie and Supplementary Fig. ). The light-targeting system, on the other hand, incorporates two pairs of acousto-optic deflectors to move the excitation focus to arbitrary locations inside a 100 × 100 × 100 μm 3 volume, guaranteeing a constant repositioning time (4 μs) independent of the relative distance between sequentially illuminated points, and equal energy delivery regardless of the number of targets . To perform simultaneous recording and stimulation of neuronal activity, we employed the pan-neuronal Tg(elavl3:H2B-GCaMP6; elavl3:ReaChR-TagRFP) zebrafish line (Fig. , Supplementary Movie ). Larvae of this double transgenic line express the green fluorescent calcium indicator GCaMP6s inside neuronal nuclei and the red-shifted light-gated cation channel ReaChR (as a fusion protein with the red fluorescent protein TagRFP) on neuronal membranes (Fig. ). We initially investigated the possible presence of crosstalk activation of ReaChR channels due to the excitation wavelength used for functional imaging. To this end, we employed two complementary approaches. First, light-sheet imaging of both double transgenic larvae (ReaChR + ) and GCaMP6s-expressing larvae (ReaChR − , lacking the light-gated channel) was performed for 5 min (volumetric rate: 2.5 Hz, λ ex : 920 nm, laser power at the sample: 60 mW).
To evaluate the level of neuronal activity, we computed the standard deviation (SD) over time for each voxel belonging to the brain (image processing voxel size = 4.4 × 4.4 × 5 μm 3 , see Data analysis for details). We adopted SD as a metric for neuronal activity since we found it more sensitive than the number of calcium peaks per minute in discriminating between conditions, as sensitive as the average peak amplitude, and yet not requiring the setting of arbitrary thresholds (Supplementary Fig. ). No major differences could be observed in the average SD distributions computed over a 5-minute exposure to the imaging laser between the two groups (Fig. ). Indeed, the resulting imaging crosstalk index (calculated as the Hellinger distance between the two average distributions, see Data analysis for details) was extremely low (3.9% ± 4.5%; Fig. ). However, since crosstalk activation of light-gated channels by a spurious wavelength is typically power-dependent , we then investigated whether higher powers of the laser used for imaging could induce a significant effect on ReaChR + larvae. Figure shows the average SD distributions obtained from ReaChR + and ReaChR − larvae at imaging powers ranging from 40 to 100 mW. Although higher laser powers shifted the distributions towards higher SD values, this shift equally affected the neuronal activity of both ReaChR + and ReaChR − larvae (see also Supplementary Fig. ). The differences between the median values of the SD distributions of the two groups (ReaChR + and ReaChR − ) at the same imaging power were not statistically significant (Fig. , right) and, indeed, the imaging crosstalk index remained essentially constant in the power range tested (Supplementary Fig. ). We also investigated imaging-related crosstalk from a behavioral standpoint.
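Computationally, the Hellinger-distance crosstalk index used in these comparisons can be sketched in a few lines (a minimal numpy illustration, not the authors' implementation; the histogram binning and the synthetic SD values are hypothetical):

```python
import numpy as np

def hellinger_distance(p, q):
    """Hellinger distance between two discrete distributions (0 = identical, 1 = disjoint)."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

# Toy SD distributions for ReaChR+ and ReaChR- voxels (synthetic data)
rng = np.random.default_rng(0)
sd_plus = rng.normal(0.021, 0.005, 10_000)   # hypothetical per-voxel SD values
sd_minus = rng.normal(0.020, 0.005, 10_000)
bins = np.linspace(0, 0.05, 51)
h_plus, _ = np.histogram(sd_plus, bins=bins)
h_minus, _ = np.histogram(sd_minus, bins=bins)
crosstalk_index = hellinger_distance(h_plus, h_minus)  # near 0 for similar distributions
```

Similar distributions yield an index near zero, matching the low crosstalk values reported above.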
We performed high-speed tail tracking of head-restrained ReaChR − and ReaChR + larvae in the absence (OFF) and presence (ON) of whole-brain light-sheet imaging (Fig. and Supplementary Movies and ). As shown in Fig. , compared to the OFF period, during 920 nm laser exposure (ON) both strains showed a slight but not significant increase in the number of tail beats per minute, suggesting that the applied power (60 mW) was well tolerated by the animals. Moreover, the relative number of tail beats during imaging ON was not significantly different between ReaChR + and ReaChR − larvae (Fig. ), providing additional proof of the absence of spurious excitation of ReaChR + larvae by the 920 nm laser used for imaging. After demonstrating the absence of crosstalk activation of ReaChR channels upon 2P light-sheet scanning, we investigated the ability of our AOD-based photostimulation system to effectively induce optogenetic activation of targeted neurons. For this purpose, we selected a stimulation wavelength (1064 nm) that is red-shifted relative to the opsin’s 2P excitation peak (975 nm). By doing so, we increased the separation between the wavelength used for optogenetic stimulation and the 2P excitation peak of GCaMP6s (920 nm), thus further reducing the potential for stimulation-induced artifacts. We then stimulated ReaChR + and ReaChR − larvae at 1064 nm (laser power at the sample: 30 mW, stimulation volume: 30 × 30 × 30 μm 3 ) while simultaneously recording whole-brain neuronal activity via light-sheet imaging. Larvae expressing the opsin showed strong and consistent calcium transients evoked at the stimulation site (Fig. , inset). Conversely, stimulating ReaChR − larvae did not result in any detectable response (Fig. , inset). We quantified the effect of the optogenetic stimulation by computing the distributions of SD values of the voxels inside the stimulation site for ReaChR + and ReaChR − larvae (Fig. , left).
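The per-voxel SD computation underlying these distributions amounts to a temporal standard deviation over the recorded volume series; a minimal sketch on synthetic data (array shapes and the stimulation-site mask are hypothetical):

```python
import numpy as np

def voxel_sd_map(movie):
    """Temporal standard deviation for each voxel of a (time, z, y, x) recording."""
    return np.std(movie, axis=0)

# Synthetic recording: 100 timepoints of an 8 x 16 x 16 voxel volume
rng = np.random.default_rng(1)
movie = rng.normal(1.0, 0.02, size=(100, 8, 16, 16))
sd_map = voxel_sd_map(movie)

# Distribution of SD values restricted to a (hypothetical) stimulation-site mask
mask = np.zeros((8, 16, 16), dtype=bool)
mask[2:5, 4:9, 4:9] = True
site_sd_values = sd_map[mask]
```

Comparing histograms of `site_sd_values` between opsin-positive and opsin-negative groups then quantifies the evoked activity.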
Stimulation at 1064 nm induced statistically significant optogenetic activation of opsin-expressing neurons in ReaChR + larvae (ReaChR − = 0.0277 ± 0.0017, ReaChR + = 0.0608 ± 0.0077, mean ± sem; Fig. , right). Despite the small stimulation volume with respect to the entire brain size, the effect of the photostimulation was also noticeable in the whole-brain SD distribution, where ReaChR + larvae showed a peak slightly shifted toward higher SD values (Fig. , left), which produced a significantly greater average SD (ReaChR − = 0.0201 ± 0.0007, ReaChR + = 0.0211 ± 0.0006, mean ± sem; Fig. , right). This appreciable difference was due to the high-amplitude calcium transients evoked by the stimulation and by the activation of the neuronal population synaptically downstream of the stimulation site. Figure shows the optogenetic activation indices (calculated as the Hellinger distance between the two average distributions, see Data analysis for details) for the stimulation site and the entire brain (stimulation site = 68.9% ± 0.3%, brain = 8.6% ± 2.1%). To rule out any possible spurious activation effect not related to the optogenetic excitation of ReaChR channels (e.g., sensory perception of the laser stimulus), we also compared the SD distributions of ReaChR − larvae subjected to imaging only (ReaChR − i ) or to simultaneous imaging and stimulation (ReaChR − i+s ). The analysis highlighted no statistically significant effects of the photostimulation in the absence of opsin expression at either the stimulus site (Supplementary Fig. ) or at a brain-wide level (Supplementary Fig. ). After assessing the absence of opsin crosstalk activation upon light-sheet imaging and verifying the ability of our system to consistently induce optogenetic activation of ReaChR + neurons, we characterized the neuronal responses to identify optimal stimulation parameters. We decided to target the stimulation at an easily recognizable cerebral nucleus mainly composed of excitatory neurons.
Neurons having their soma inside the habenulae express vesicular glutamate transporter 2 (VGLUT2, also known as SLC17A6 ), representing a coherent group of excitatory glutamatergic neurons , . We therefore directed the stimulation onto the left habenula, an anatomically segregated nucleus that is part of the dorsal-diencephalic conduction system . We adopted a stimulation volume of 50 × 50 × 50 μm 3 , sufficient to cover the entire habenula (Fig. ). This volume was populated with 6250 points distributed across 10 z-planes ( z step: 5 μm). With a point dwell-time of 20 μs, a complete cycle over all the points in the volume took only 125 ms. We first characterized the calcium transients as a function of the stimulation duration (scan time, Fig. ) in the range of 125 to 625 ms (1–5 iterations over the volume). Figure shows the amplitude of the calcium peaks as a function of the scan time. Increasing scan durations produced a progressive increase in peak amplitude until a plateau was reached between 4 and 5 volume cycles (scan time 500-625 ms). From a kinetic point of view, increasing scan durations led to a significant decrease in the rise time of calcium transients (Fig. ). Additionally, the decay time of the calcium transients progressively increased with increasing scan time (Fig. ). We also characterized the neuronal response as a function of the 1064 nm excitation power (ranging from 10 to 40 mW, Fig. ). The amplitude of the calcium transients showed a strong linear dependence on the stimulation power (Fig. , R 2 : 0.89). While the rise time did not seem to be affected by the laser intensity (Fig. ), the decay time showed a strong linear proportionality (Fig. , R 2 : 0.82). The duration of calcium transients (Supplementary Fig. ), instead, increased with increasing stimulation power but was not significantly affected by scan time. 
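The cycle and stimulus durations quoted above follow from simple arithmetic on the stated parameters (6250 points, 10 z-planes, 20 μs dwell time); the 25 × 25 in-plane grid below is inferred from the 0.5 point/μm density over 50 μm:

```python
# Scan-timing arithmetic for the AOD-based stimulation volume (values from the text)
POINT_DWELL_US = 20            # dwell time per point, in microseconds
POINTS_PER_PLANE = 25 * 25     # 0.5 point/μm over 50 μm along each in-plane axis
N_PLANES = 10                  # z step of 5 μm spanning 50 μm

points_per_cycle = POINTS_PER_PLANE * N_PLANES       # 6250 points per volume
cycle_ms = points_per_cycle * POINT_DWELL_US / 1000  # one full iteration: 125 ms

def scan_time_ms(n_iterations):
    """Total stimulus duration for n consecutive iterations over the volume."""
    return n_iterations * cycle_ms
```

Four iterations give the 500 ms stimulus duration adopted in the connectivity experiments, and five give the 625 ms upper bound of the characterization range.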
Given the small variation in rise time, in both cases the overall duration of the calcium transient was largely determined by the decay time trend. After this initial technical validation, we employed our all-optical setup to identify cerebral regions functionally linked to the left habenular nucleus. To this end, we designed the following stimulation protocol (Fig. ). For each zebrafish larva, we performed six trials of five optogenetic stimuli (interstimulus interval: 16 s) during simultaneous whole-brain light-sheet imaging. Based on the characterization performed, we adopted a stimulus duration of 500 ms (4 complete consecutive iterations over the 50 × 50 × 50 μm 3 volume, point density: 0.5 point/μm) and a laser power of 30 mW to maximize the neuronal response while keeping the laser intensity low (Supplementary Movie ). First, we evaluated the brain voxel activation probability in response to the optogenetic stimulation of the left habenula (LHb). Figure shows different projections of the whole-brain average activation probability map (Supplementary Movies and ). The LHb, the site of stimulation, predictably showed the highest activation probability values. In addition to the LHb, an unpaired nucleus located at the ventral midline of the posterior midbrain showed an increased activation probability with respect to the surrounding tissue. Then, we segmented the entire larval brain into ten different anatomical regions according to structural boundaries (Fig. , left). By extracting the average activation probability from each region (Fig. , right), we found that the deep midbrain district corresponded to the interpeduncular nucleus (IPN). The IPN is a renowned integrative center and relay station within the limbic system that receives habenular afferents traveling through the fasciculus retroflexus – .
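The voxel activation probability is not given an explicit estimator in this excerpt; one plausible reading, sketched below on synthetic data, is the fraction of stimuli after which a voxel's peak response exceeds an amplitude threshold (the threshold, array shapes, and data are assumptions):

```python
import numpy as np

def activation_probability(responses, threshold):
    """Fraction of stimuli for which each voxel's peak response exceeds `threshold`.
    `responses`: (n_stimuli, n_voxels) array of per-stimulus peak responses."""
    return np.mean(responses > threshold, axis=0)

# Synthetic example: 30 stimuli, 4 voxels with increasing responsiveness
rng = np.random.default_rng(3)
base = rng.normal(0.0, 0.05, size=(30, 4))
base[:, 2] += 0.2   # a moderately responsive voxel
base[:, 3] += 1.0   # a strongly responsive voxel (e.g., inside the stimulated site)
prob = activation_probability(base, threshold=0.1)
```

Under this reading, voxels inside the stimulation site approach a probability of 1, while unresponsive tissue stays near 0.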
Figure shows the average normalized activation probability distributions for voxels inside the LHb, IPN and right habenula (RHb, the region with the highest activation probability after LHb and IPN). LHb and IPN neurons exhibited activation probabilities as high as 100% and 51%, respectively. Notably, despite the LHb presenting higher activation probabilities than the IPN across larvae, higher LHb probabilities did not necessarily correspond to higher IPN probabilities (Fig. ). Figure shows representative mean Δ F / F signals obtained from the LHb (blue) and IPN (yellow) regions during a stimulation trial. The LHb consistently responded to the photostimulation with high-amplitude calcium transients. The IPN showed lower-amplitude activations (~1/10 of the LHb), yet reproducibly followed the pace induced by LHb stimulation (as also visible in Supplementary Movie , yellow arrowhead). The coherence between these time traces was confirmed by their cross-wavelet power spectrum, showing the highest density around the optogenetic trigger rate (1/16 Hz, Fig. ; see also Supplementary Fig. ). As a comparison, Supplementary Fig. shows the cross-wavelet power spectral density of the LHb and RHb activities, where only null-to-low coupling levels emerge. We then examined whole-brain functional connectivity during optogenetic stimulation. To this end, we first extracted the neuronal activity from previously segmented brain regions. Figure shows, as an example, a heatmap of neuronal activity over time during a single stimulation trial. The LHb and IPN were apparently the only two regions following the photostimulation trigger (dark red vertical bars). This result is confirmed and generalized by the chord diagram presented in Fig. , which presents the average all-against-all correlation between the neuronal activity of different brain regions.
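Such an all-against-all comparison can be computed directly as a Pearson correlation matrix over region-averaged traces; a minimal sketch on synthetic traces (the region roles and coupling structure are illustrative only):

```python
import numpy as np

# Synthetic region-averaged activity traces: a 'seed' region and one follower
rng = np.random.default_rng(4)
n_frames = 1000
seed = rng.normal(size=n_frames)                          # e.g., an LHb-like trace
follower = 0.6 * seed + 0.4 * rng.normal(size=n_frames)   # e.g., an IPN-like trace
unrelated = rng.normal(size=n_frames)

traces = np.vstack([seed, follower, unrelated])  # shape: (regions, time)
corr = np.corrcoef(traces)                       # all-against-all Pearson matrix
```

Here `corr[0, 1]` is high for the coupled pair, while `corr[0, 2]` stays near zero, mirroring the LHb-IPN versus other-region contrast in the chord diagram.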
The LHb and IPN were the two anatomical districts that showed the strongest functional connectivity during stimulation (Pearson’s correlation coefficient = 0.605 ± 0.079, mean ± sem). To explore the causal relationships among the observed interactions between brain regions, known as effective connectivity , we analyzed the Granger causality (GC) of their spatially averaged activities . By examining the added predictability of one time series based on the past values of another, GC analysis allows us to draw inferences about directional cause-and-effect relationships between brain activities . In Fig. , the average strength of the directed interaction among brain regions is depicted using the F statistic. The results from GC analysis showed that the activity recorded in the IPN has a strong causal link only with the activity triggered in the LHb (88.83% ± 8.24% of trials were significant for the LHb→IPN direction, while only 2.78% ± 2.78% were significant for the opposite direction, IPN→LHb; mean ± sem). Furthermore, Fig. illustrates the significant directional causality links between brain regions. Notably, compared to the naturally occurring interacting pairs, the optogenetically revealed LHb-IPN pair showed increased consistency among trials in the significance of their interaction direction (arrow width; percentage of significant trials for the directed interaction: Th-HB, 61.11% ± 9.29%; C-HB, 61.11% ± 10.24%; C-Th, 36.11% ± 5.12%; mean ± sem). On the other hand, the strength of the causal link (represented by the F statistic and graphically depicted by arrow color) for the stimulated LHb-IPN pair was comparable to that of spontaneously occurring pairs ( F value: LHb-IPN, 12.16 ± 1.29; PT-HB, 12.10 ± 1.48; Th-PT, 11.40 ± 1.64; PT-HB, 13.98 ± 2.33; mean ± sem). Interestingly, among the causal connections highlighted by GC analysis, causality links (albeit to a lesser extent) emerged also between the LHb-RHb and T-RHb pairs.
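A minimal sketch of the bivariate Granger F-test logic just described, using ordinary least squares with a fixed lag order (the lag choice, estimator details, and synthetic data are assumptions, not the authors' pipeline):

```python
import numpy as np

def granger_f(x, y, p=2):
    """F statistic for the hypothesis 'x Granger-causes y' at lag order p.
    Restricted model: y_t from its own p past values; full model adds x's past."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(y)
    Y = y[p:]
    lag = lambda v: np.column_stack([v[p - k: n - k] for k in range(1, p + 1)])
    ones = np.ones((n - p, 1))
    Xr = np.hstack([ones, lag(y)])          # restricted design matrix
    Xf = np.hstack([ones, lag(y), lag(x)])  # full design matrix
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    dof = (n - p) - Xf.shape[1]
    return ((rss(Xr) - rss(Xf)) / p) / (rss(Xf) / dof)

# Synthetic check: x drives y with a one-frame delay, so F(x->y) >> F(y->x)
rng = np.random.default_rng(5)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()
f_xy, f_yx = granger_f(x, y), granger_f(y, x)
```

The asymmetry between the two F values is the quantity the arrows in the causality diagram summarize.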
After using GC analysis to assess the direction of the causal links, we employed partial correlation analysis to assess the directness of the observed connections. Partial correlation analysis measures the remaining correlation between two regions after accounting for the influence of all other regions. The results indicate, with a probability of 88.9% ± 7.0% (mean ± sem), that the LHb-IPN link is direct. In contrast, the LHb-RHb pair produced an opposite result (directness probability 30.4% ± 10.9%, mean ± sem), suggesting an indirect connection, while T-RHb is associated with a more ambiguous result (directness probability 41.6% ± 17.1%, mean ± sem). Next, we investigated the seed-based functional connectivity of the left habenular nucleus. To this end, we computed the Pearson’s correlation between the average neuronal activity in the LHb (seed) and the activity in each brain voxel. Figure shows different projections of the average functional connectivity map of the LHb (Supplementary Movie ). In addition to LHb neurons, which exhibited an expectedly high self-correlation, IPN neurons showed visibly higher functional connectivity than other brain regions. This result is confirmed by the analysis of the average correlation coefficient of the different regions (Fig. ), where the IPN was the only region presenting statistically significant functional connectivity with the LHb. Figure shows the average normalized distributions of correlation coefficients computed from voxels inside the LHb, IPN and RHb. In contrast to the RHb, whose distribution was essentially centered at 0 with a short tail towards negative correlation values, neurons in the LHb and IPN showed functional connectivity values as high as 100% and 65%, respectively. In order to visually isolate the neuronal circuit underlying LHb stimulation, we set a threshold on the correlation coefficient. Based on the results shown in Fig.
, we chose a threshold of 0.12 as the highest value separating regions showing significantly higher correlation with the seed activity (namely, LHb and IPN). Figure shows the binarized functional connectivity map of the left habenular nucleus in larval zebrafish (Supplementary Movie ). Dissecting brain functional and effective connectivity requires advanced technology for “reading” and “writing” neuronal activity. Here, we have presented the application of an all-optical 2P system intended for simultaneous imaging and optogenetic control of whole-brain neuronal activity in zebrafish larvae. Our method employs light-sheet microscopy to perform functional imaging, ensuring comprehensive mapping of the entire brain at a significantly improved temporal resolution compared to conventional 2P point-scanning imaging techniques. To elicit precise photoactivation within the larval brain, our light-targeting unit utilizes two pairs of AODs, enabling the displacement of the focal volume to arbitrary locations. Admittedly, the utilization of AODs for optogenetics has been restricted to 1P photostimulation in 2D – owing to the drop in transmission efficiency along the optical axis, which hinders a homogeneous 2P volumetric excitation. However, as we demonstrated in a previous work , by properly tuning the trains of chirped radio frequency (RF) signals that drive AODs, it is feasible to enhance the uniformity of energy delivery when shifting the focus of the excitation beam. This enhancement has allowed us to proficiently execute optogenetic stimulation of specific targets over a volumetric range of 100 × 100 × 100 μm 3 . 
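Returning to the seed-based analysis described above, the correlation mapping and 0.12-threshold binarization can be sketched as follows (synthetic traces; only the threshold value is taken from the text):

```python
import numpy as np

def seed_correlation_map(seed_trace, voxel_traces):
    """Pearson correlation between a seed trace and each row of `voxel_traces`
    (shape: n_voxels x n_frames)."""
    s = (seed_trace - seed_trace.mean()) / seed_trace.std()
    v = voxel_traces - voxel_traces.mean(axis=1, keepdims=True)
    v /= v.std(axis=1, keepdims=True)
    return (v @ s) / len(s)

# Synthetic example: three voxel traces, one strongly coupled to the seed
rng = np.random.default_rng(6)
seed = rng.normal(size=2000)
voxels = np.vstack([
    0.7 * seed + 0.3 * rng.normal(size=2000),  # coupled voxel (IPN-like)
    rng.normal(size=2000),                     # uncoupled voxels
    rng.normal(size=2000),
])
corr_map = seed_correlation_map(seed, voxels)
binarized = corr_map > 0.12  # threshold value taken from the text
```

Thresholding leaves only voxels whose activity tracks the seed, isolating the stimulated circuit.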
An intriguing aspect of our approach is that, owing to the use of remote focusing of the detection objective and of AODs for stimulation light defocusing, the localization of the photostimulation volume remains entirely independent of the sequential acquisition of different brain planes, thus affording greater flexibility in our experimental investigations. As previously mentioned, our setup exploits 2P excitation both for imaging and optogenetic stimulation. On the imaging side, the use of NIR light to produce the sheet of light leads to a significant reduction of the common striping artifacts that could otherwise severely hinder the interpretation of functional data. Nevertheless, due to the nonlinear nature of its excitation and the need to elongate the axial point spread function (PSF) of the illumination beam to produce the light sheet (thus reducing photon density), 2P LSFM is also typically prone to a low signal-to-noise ratio. As a result, despite a voxel size (2.2 × 2.2 × 5 μm 3 ) being 30–35% of the average diameter of a neuronal nucleus (6–7 μm), we did not achieve consistent detection of single neurons throughout the entire brain. On the photostimulation side, the use of the nonlinear interaction between light and matter enables precise optical confinement of the stimulation volume, without resorting to narrower genetic control of opsin expression, which is typically required when using 1P excitation – . In addition to these aspects, the exclusive use of NIR light as an excitation source, in contrast to visible lasers, dramatically diminishes unwanted and uncontrolled visual stimulation, since these wavelengths are scarcely perceived by most vertebrate species, including zebrafish , . Nevertheless, we observed that 2P light-sheet imaging can elicit a power-dependent increase in the neuronal activity of zebrafish larvae.
Although it did not significantly affect zebrafish behavior, this effect, which may be attributed to non-visual sensory perception of the excitation light, underscores once more the importance of keeping the overall energy applied to the sample low. To the best of our knowledge, this is the first time that a fully 2P all-optical setup employs light-sheet microscopy for rapid whole-brain imaging and AODs for 3D optogenetic stimulation. With the aim of establishing an all-optical paradigm for investigating the functional and effective connectivity of the larval zebrafish brain, we considered different sensor/actuator pairs and eventually opted for the GCaMP6s/ReaChR pair. The green calcium reporter GCaMP6s represents a reliable indicator that has undergone extensive evaluation – . On the other hand, the actuator ReaChR, in comparison with other red-shifted opsins, has a slow channel-closing mechanism that is particularly suitable for both sequential photostimulation approaches and 2P excitation . A crucial aspect of all-optical studies lies in the separation between the excitation spectra of the proteins used for stimulating and revealing neuronal activity. Previous research has demonstrated that the slow channel closing of ReaChR makes this opsin more susceptible to crosstalk activation when scanning the 920 nm imaging laser at power levels exceeding 60 mW . However, in our work, we did not observe a significant increase in cross-activation even at power levels as high as 100 mW. This divergence can be attributed to the distinctive excitation features of 2P light-sheet imaging compared to 2P point-scanning imaging. In digitally scanned 2P LSFM, the use of low numerical aperture excitation objectives (to obtain a stretched axial illumination PSF, continuously scanned to produce the sheet of light) results in lower intensities (and thus lower photon density) than point-scanning methods, for equal laser powers.
It is worth noting that, despite the negligible crosstalk, 2P light-sheet imaging may still lead to subthreshold activation of ReaChR + neurons (at 920 nm the opsin retains approximately 25% of the peak action cross-section ), potentially resulting in altered network excitability . Previous studies have employed 1030 nm pulsed lasers to stimulate ReaChR , . The results of our work demonstrate the feasibility of photostimulating ReaChR at 1064 nm, a wavelength red-shifted by almost 100 nm compared to the ReaChR 2P absorption peak (975 nm ). Furthermore, the use of the 1064 nm wavelength for photostimulation, which is red-shifted with respect to the tail of the 2P excitation cross-section of GCaMP6s , accounts for the absence of fluorescence artifacts potentially caused by calcium indicator excitation at the wavelength employed for optogenetic intervention. The characterization of the kinetic features of calcium transients elicited by optogenetic stimulation, which served as a benchmark for identifying the optimal excitation configuration, highlighted two interesting aspects. First, we observed a linear dependence of calcium peak amplitude on the applied stimulation power. This behavior suggests that increasing power produces a proportional increment in the firing rate of ReaChR + neurons. Second, we observed a decrease in the calcium transient rise time in response to longer stimulation durations. This result may be attributed to the fact that ReaChR has a channel off rate ( τ -off) of 140 ms , enabling it to integrate photons beyond the duration of a single volume iteration (125 ms). Supporting this hypothesis is the fact that, after two iterations over the stimulation volume (250 ms and beyond), the rise time remains constant. As the system allows accurate identification of groups of neurons functionally connected with the stimulated ones, we exploited the setup to explore the efferent connectivity of the left habenula.
The habenulae are bilateral nuclei located in the diencephalon that are highly conserved among vertebrates and connect brain regions involved in diverse emotional states such as fear and aversion, as well as learning and memory . Like mammals, the habenulae in zebrafish are highly connected hubs receiving afferents from the entopeduncular nucleus , hypothalamus, and median raphe , in addition to left-right asymmetric inputs , . The habenula can be divided into dorsal (dHb) and ventral (vHb) portions (equivalent to the mammalian medial and lateral habenula , respectively), each exhibiting exclusive efferent connections. Specifically, dHb sends inputs to the IPN while vHb projects to the median raphe , . As a consequence, optogenetic stimulation of the entire LHb should lead, in principle, to the activation of both the IPN and the raphe. However, in our experiments, we observed a high probability of activation, strong correlation and causal link only within the IPN population of neurons. This apparent discrepancy can be explained by the fact that, at the larval stage, the vHb represents only a small fraction of the overall habenular volume . As a result, the limited number of vHb neurons would possess a reduced number of connections with the median raphe, resulting in a weak downstream communication. Furthermore, as described by Amo and colleagues , although vHb neurons terminate in the median raphe, no direct contact with serotonergic neurons is observed, suggesting the presence of interneurons that may bridge the link, similar to what is observed in mammals . This inhibitory connection is consistent with the absence of activation of the raphe, which we observed upon left habenular stimulation. Notably, we did not observe any activation in regions downstream of the IPN either. 
Although adult zebrafish exhibit IPN habenular-recipient neurons projecting to the dorsal tegmental area or griseum centrale , our results corroborate the structural observations of Ma and colleagues from a functional standpoint. Indeed, using anterograde viral labeling of postsynaptic targets, Ma et al. highlighted that in larval zebrafish habenular-recipient neurons of the IPN do not emanate any efferent axon . LHb and the IPN show a high interindividual variability in terms of average activation probability but a lower variability in terms of correlation. This is because larvae may exhibit slightly different opsin expression levels, which result in greater variance in the amplitude of evoked calcium transients and thus a higher activation probability (i.e., the probability of exceeding an arbitrary amplitude threshold). Conversely, the strength of functional connections (i.e., the degree of correlation) appears to not be dependent on the amplitude of evoked neuronal activity. This aspect is also confirmed by the high cross-wavelet power spectral density in a narrow bandwidth centered on the frequency of the triggered optogenetic stimulus, which we observed in the average activity time traces extracted from the LHb and IPN. Functional connectivity refers to the statistical correlations that signify the synchronous activity between brain regions, without necessarily implying a direct causal interaction. Effective connectivity, on the other hand, takes a step further by seeking to understand the causal influence and the direction of the interaction that one neural population has over another. To delve into the realm of effective connectivity we applied Granger causality analysis. GC results confirmed the presence of a causal link between the LHb and the IPN, with the activity in the latter predicted with high consistency only by the activity triggered in the former. 
Notably, the magnitude of the causal link strength (F statistic) for the LHb-IPN triggered pair is very similar to that of naturally occurring pairs, underlining the efficacy of our methodology in probing brain connectivity. In addition, partial correlation analysis revealed that the link we observed between LHb and IPN is a direct one, with the interaction between the two not intermediated by any other region, a result which is consistent with the presence of an anatomical connection between LHb and IPN via the fasciculus retroflexus . Notably, GC analysis also revealed weaker connections between LHb-RHb and T-RHb. Regarding the former, results from partial correlation analysis highlighted that the link between the two habenulae is most probably indirect. Indeed, no direct connections between the left and right habenulae are known to date, and a crossed feedback circuit passing through the monoaminergic system has been hypothesized . Concerning the T-RHb connection, it is known that in zebrafish a small subset of bilateral pallial neurons sends asymmetric innervations which, passing through the stria medullaris and the habenular commissure, selectively terminate in the RHb . Despite this direct anatomical connection, partial correlation analysis produced ambiguous results regarding the directness of this pair of regions. This result is probably due to the limited number of telencephalic cells contacting the RHb , , whose activity could have been overshadowed by the averaging of the activity over the entire telencephalon. In conclusion, we employed optogenetic stimulation to map the whole-brain functional connectivity of the left habenula efferent pathway in zebrafish larvae. This application has showcased the remarkable capabilities of our 2P setup for conducting crosstalk-free all-optical investigations. The use of AODs for precisely addressing the photostimulation is a hot topic in systems neuroscience, as evidenced by recent conference contributions – . 
Owing to their discontinuous scanning and constant access time, these devices indeed enable a random-access modality. This feature empowers AODs with the native capability to perform rapid sequential excitation over multiple sparsely distributed cellular targets, a feature recently sought after also by SLM adopters . Indeed, rapid sequential stimulation enabled by AODs represents an invaluable tool for studies aiming at replicating a physiological neuronal activation pattern. Future efforts will be devoted to further expanding the volume addressable with AOD scanning while concurrently improving the uniformity of energy delivery. Furthermore, leveraging transgenic strains that express the actuator under more selective promoters (such as vglut2 for glutamatergic and gad1b for GABAergic neurons) will undoubtedly help produce accurate inferences on network structures , , thus boosting the quest towards a comprehensive picture of zebrafish brain functional connectivity. On the imaging side, technical improvements will be made in order to increase image contrast while maintaining a low laser power on the sample. This advancement will enable the use of automated segmentation algorithms for single-neuron detection. Cell-wise analyses will then make it possible to refine the reconstruction of neuronal effective connectivity, capturing the nuanced differences between individual cells. Together, nonlinear light-sheet microscopy and 3D optogenetics with AODs, along with the employment of larval zebrafish, offer a promising avenue for bridging the gap between microscale resolution and macroscale investigations, enabling the mapping of whole-brain functional/effective connectivity at previously unattainable spatio-temporal scales. 
Optical setup All-optical control and readout of zebrafish neuronal activity is achieved through a custom system that combines a 2P dual-sided illumination LSFM for whole-brain calcium imaging , , and an AOD-based 2P light-targeting system for 3D optogenetic stimulation (Supplementary Fig. and Fig. ). The two systems have been slightly modified with respect to the previously published versions to optically couple them. Briefly, the 2P light-sheet imaging path is equipped with a pulsed Ti:Sa laser (Chameleon Ultra II, Coherent), tuned at 920 nm. After a group delay dispersion precompensation step, the near-infrared beam is adjusted in power and routed to an electro-optical modulator (EOM) employed to switch the light polarization orientation between two orthogonal states at a frequency of 100 kHz. A half-wave plate and a quarter-wave plate are used to control the light polarization plane and to pre-compensate for polarization distortions. Then, the beam is routed to a hybrid pair of galvanometric mirrors (GMs). One is a fast resonant mirror (CRS-8 kHz, Cambridge Technology) used to digitally generate the virtual light-sheet by scanning the beam (at 8 kHz) along the rostro-caudal direction of the larva. The second GM is a closed-loop mirror (6215H, Cambridge Technology) used to displace the light-sheet along the dorso-ventral direction. The scanned beam is relayed by a scan lens and a tube lens into a polarizing beam splitter, which diverts the light alternatively into either of the two excitation arms, according to the instantaneous polarization state imposed by the EOM. In order to maximize fluorescence collection, a half-wave plate placed after the beam splitter in one of the two arms rotates the light polarization plane so that light coming from both excitation paths is polarized parallel to the table surface . Through a twin relay system, the beams are ultimately routed into the excitation objectives (XLFLUOR4X/340/0,28, Olympus). 
The excitation light is focused inside a custom fish-water-filled imaging chamber, heated to 28.5 °C. The fine positioning of the sample under the detection objective is performed with three motorized stages. The fluorescence emitted by the sample is collected with a water-immersion objective (XLUMPLFLN20XW, Olympus, NA = 1). Finally, a relay system brings the collected signal to an electrically tunable lens (ETL; EL-16-40-TC-VIS-5D-C, Optotune) which performs remote axial scanning of the detection objective focal plane in sync with the light-sheet closed-loop displacement. The collected signal is filtered (FF01-510/84-25, Semrock) to select green emission. The filtered light reaches an air objective (UPLFLN10X2, Olympus, NA = 0.3), which demagnifies the image onto a subarray (512 × 512 pixels) of an sCMOS camera (ORCA-Flash4.0 V3, Hamamatsu) working at 16-bit integer gray-level depth. The final magnification of the imaging system is 3×, with a resulting pixel size of 2.2 μm. Below the transparent PMMA bottom of the imaging chamber, a high-speed CMOS camera (Blackfly S USB3, FLIR) equipped with a varifocal objective lens (employed at 50 mm; YV3.3x15SA-2, Fujinon) is positioned to perform behavioral imaging (tail deflections) during light-sheet imaging. Illumination for behavioral imaging is provided by an 850 nm LED (M850L3, Thorlabs) positioned at an angle above the imaging chamber. A bandpass filter (FF01-835/70-25, Semrock) is placed in front of the objective lens to block high-intensity light from the 920 nm light-sheet (see Supplementary Fig. ). Recordings are performed using a 300 × 300 pixels subarray of the camera chip, covering the entire larval body. This configuration allows us to achieve sufficient magnification (pixel size: 15.4 μm) and contrast to enable live tail tracking. The 3D light-targeting system employs a 1064 nm pulsed laser (FP-1060-5-fs Fianium FemtoPower, NKT Photonics, Birkerød, Denmark) as an excitation source. 
The output power (max. 5 W) is attenuated and conveyed to a half-wave plate, which is employed to adjust the polarization of the beam, before the first AOD stage (DTSXY-400 AA Opto Electronic, Orsay, France) is reached. The output beam is then coupled with the second AOD stage through two 1:1 relay systems. From the exit of the second stage, by means of a 1:1 relay system, the beam is routed to a pair of galvanometric mirrors (GVS112, Thorlabs). The scanned beam is then optically coupled with a scan lens (AC254-100-B, Thorlabs) and a tube lens ( F = 300 mm, in turn formed by two achromatic doublets - AC254-150-C-MLE, F = 150 mm by Thorlabs, so customized to avoid aberrations). The excitation light is finally deflected by a dichroic mirror (DMSP926B, Thorlabs) toward the back pupil of the illumination objective, which is also employed by the imaging system for fluorescence detection. Optical characterization of the system The detailed optical characterization of the 2P light-sheet system was described in a previous work of our group . Summarizing, each of the light sheets coming from the two excitation arms has a transversal full width at half maximum (FWHM) at waist of 6 µm and a longitudinal FWHM of 327 µm. The lateral FWHM of the detection PSF is 5.2 µm. Herein, we describe the optical performance of the AOD-based light-targeting system used for optogenetic stimulation. When using AODs to move the beam away from its native focus, the illumination axial displacement—or defocus—has a linear relation with the chirp parameter α, i.e., the rate of frequency change of the driving radio waves . We thus measured the axial displacement of the focused beam as a function of α by illuminating a fluorescent solution (Sulforhodamine 101; S7635, Sigma-Aldrich) and localizing the maximum fluorescent peak in the volume as a function of α, which ranged from −1 MHz/µs to 1 MHz/µs (step size 0.1 MHz/µs). 
For each chirp configuration, the ETL in detection path was used to obtain a 200-µm deep stack (step size: 1 µm) centered at the nominal focal plane of the illumination objective. Supplementary Fig. shows the axial position of the fluorescent intensity peak as a function of the chirp addressed, following an expected linear trend. We evaluated the conversion coefficient from the slope of the linear fit, which was 50.44 ± 3.45 µm/MHz/µs (mean ± sd). We also measured the amount of energy released on the sample as a function of the chirp parameter or, basically, as a function of the time spent illuminating axially displaced targets. Indeed, the beam would spend slightly different periods lighting spots displaced in different z -planes as the effective frequency ramping time is inversely proportional to the chirp parameter α imposed on the RF signals driving the AODs. As explained in detail in a previous work , we partially recovered this non-uniformity in the distribution of power deposited along the axial direction by repeatedly triggering equal frequency ramps within the desired dwell time (here, 20 µs each point), using what we called multi-trigger modality. With respect to the conventional single-trigger modality, we effectively multiplied the minimum energy deposited on different focal planes, while keeping a stable dwell time. Supplementary Fig. shows in black the usual light transmission distribution collected as a function of the chirp parameter (single-trigger modality) and in blue the distribution obtained with our multi-trigger approach. We then measured the point spread function (PSF) of the light-targeting system using subdiffraction-sized fluorescent beads (TetraSpeck microspheres, radius 50 nm; T7279, Invitrogen) embedded in agarose gel (1.5%, w/v) at a final concentration of 0.0025% (vol/vol). The measurements were performed on a field of view of 100 × 100 μm 2 , performing raster scans of 500 × 500 points. 
The objective was moved axially covering a 200 μm range ( z step: 1 μm) and the emitted signal was conveyed and collected on an auxiliary photomultiplier tube positioned downstream of the fluorescence-collecting objective. The radial and axial intensity profiles of 25 beads were computed using the open-source software ImageJ and fitted with Gaussian functions in Origin Pro 2021 (OriginLab Corp.) to estimate FWHM. Supplementary Fig. shows, as an example, the raw fluorescence distributions of 5 beads and the Gaussian fit corresponding to the average FWHM, plotted in red and black for the radial and axial PSF, respectively. We found them to be FWHMr = 0.81 ± 0.06 µm and FWHMa = 3.79 ± 0.66 µm (mean ± sd). This measurement was performed by driving the AODs with stationary RF signals. To evaluate the eventual illumination spatial distortions arising away from the nominal focal plane of the objective, we repeated the same PSF measurement for different chirps or, in other words, for different AOD-controlled axial displacements (80 µm range, step size of 20 µm). The average FWHM obtained for the bead intensity distribution is shown in Supplementary Fig. . The radial PSF of the system remains approximately constant as a function of the chirp parameter. A small change is due to the chromatic dispersion affecting the laser beam interacting with the crystal. The deflection angle induced by the AODs on the incident beam is frequency and wavelength dependent. This means that a broadband laser is straightforwardly spatially dispersed by the crystal and that the frequency variations can slightly affect this distortion. Moreover, the axial PSF tends to become slightly oblong with increasing axial displacement. This effect is attributable to the temporal dispersion affecting a short-pulsed laser beam interacting with the crystal. This temporal broadening reduces the axial 2P excitation efficiency, generating a larger axial PSF. 
This effect is more evident when a chirp is applied to the RF signals driving the AODs. Under these conditions, the beam reaches the objective back-pupil in a non-collimated state. Future efforts will be devoted to the compensation of chromatic aberration and temporal dispersion, for example employing a highly dispersive prism upstream of AODs . Zebrafish lines and maintenance The double Tg(elavl3:H2B-GCaMP6s; elavl3:ReaChR-TagRFP) zebrafish line was obtained from outcrossing the Tg(elavl3:H2B-GCaMP6s) , and the Tg(elavl3:ReaChR-TagRFP) , lines on the slc45a2 b4/- heterozygous albino background , which we previously generated. The double transgenic line expresses the fluorescent calcium reporter GCaMP6s (nucleus) and the red-shifted light-activatable cation channel ReaChR (plasma membrane) in all differentiated neurons. ReaChR is expressed as a fusion peptide with the red fluorescent protein TagRFP to ensure its localization. Zebrafish strains were reared according to standard procedures , and fed twice a day with dry food and brine shrimp nauplii ( Artemia salina ), both for nutritional and environmental enrichment. For the experiments, we employed N = 20, 5 dpf Tg(elavl3:H2B-GCaMP6s; elavl3:ReaChR-TagRFP) and N = 13, 5 dpf Tg(elavl3:H2B-GCaMP6s) , both of which were on the slc45a2 b4/b4 homozygous albino background. Zebrafish larvae used in the experiments were maintained at 28.5 °C in fish water (150 mg/L Instant Ocean, 6.9 mg/L NaH2PO4, 12.5 mg/L Na2HPO4, 1 mg/L methylene blue; conductivity 300 μS/cm, pH 7.2) under a 14/10 light/dark cycle, according to standard protocols . Experiments involving zebrafish larvae were carried out in compliance with European and Italian laws on animal experimentation (Directive 2010/63/EU and D.L. 4 March 2014, n.26, respectively), under authorization n.606/2020-PR from the Italian Ministry of Health. 
Zebrafish larvae preparation To select calcium reporter/opsin-expressing larvae for use in the experiments, 3 dpf embryos were subjected to fluorescence screening. The embryos were first slightly anesthetized with a bath in tricaine (160 mg/L in fish water; A5040, Sigma-Aldrich) to reduce movement. Using a stereomicroscope (Stemi 508, Carl Zeiss) equipped with LEDs for fluorescence excitation (for GCaMP6s: blue LED, M470L3; for TagRFP: green LED, M565L3, both from Thorlabs) and fluorescence filters to block excitation light (for GCaMP6s: FF01-510/84-25; for TagRFP: FF01-593/LP-25, both from Semrock), embryos were selected according to the presence of brighter green/red fluorescent signals in the central nervous system. Screened embryos were transferred to a Petri dish containing fresh fish water and kept in an incubator at 28.5 °C until 5 dpf. Zebrafish larvae were mounted as previously described . Briefly, each larva was transferred into a reaction tube containing 1.5% (w/v) low-gelling temperature agarose (A9414, Sigma-Aldrich) in fish water, maintained fluid on a heater set at 38 °C. Using a plastic pipette, larvae were then placed on a microscope slide inside a drop of melted agarose. Before gel polymerization, their position was adjusted using a couple of fine pipette tips for the dorsal portion to face upwards. To avoid movement artifacts during the measurements, larvae were paralyzed by a 10-min treatment with 2 mM d-tubocurarine (93750, Sigma-Aldrich), a neuromuscular blocker. For tail-free preparations, upon gel polymerization, agarose caudal to the swimming bladder was removed using a scalpel. In this case, no paralyzing agent was applied. Mounted larvae were then placed inside the imaging chamber filled with fish water and thermostated at 28.5 °C for the entire duration of the experiment. 
Structural imaging to evaluate expression patterns in double transgenic zebrafish larvae Confocal imaging of a 5 dpf Tg(elavl3:H2B-GCaMP6s; elavl3:ReaChR-TagRFP) larva on an albino background was performed to evaluate the spatial expression of the two proteins. The larva was mounted in agarose as described above and deeply anesthetized with tricaine (300 mg/L in fish water). We employed a commercial confocal microscope (T i 2, Nikon) equipped with two continuous wavelength lasers emitting at 488 and 561 nm for GCaMP6s and TagRFP excitation, respectively. Imaging was performed using a 10× objective, allowing the entire head of the animal to fit into the field of view. Using a piezo-electric motor (PIFOC, Physik Instrumente - PI), the objective was moved at 182 consecutive positions ( z step: 2 μm) to acquire the volume of the larval head. Simultaneous whole-brain and behavioral imaging Head restrained larvae, capable of performing wide tail deflections, were imaged from below the 2P LSFM imaging chamber using a dedicated high-speed camera (see Optical setup for details). Images were streamed at 300 Hz via a USB3 connection to a workstation running a custom tool for live tail movement tracking, developed using the open-source Python Stytra package . Larval tail length was divided into 9 segments, and the sum of their relative angles was employed to quantify tail deflection. Tail movements of both ReaChR + and ReaChR − larvae were tracked for 200 s. During the first half 2P LSFM imaging was off (imaging OFF). During the second half larvae were subjected to whole-brain light-sheet imaging (imaging ON) with the same parameters described in the previous section. Each larva measured underwent 3 consecutive 200-s simultaneous whole-brain/behavioral recordings (inter-measurement interval less than 1 min). 
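The tail-beat quantification described here (sum of the 9 relative segment angles, with the 20° deflection threshold and the 0.5 s merging rule given in the Data analysis subsection) can be sketched as follows. This is a minimal illustration with our own function and parameter names, not the authors' actual Stytra-based tool:

```python
import numpy as np

def count_tail_beats(deflection_deg, fs=300.0, threshold=20.0, merge_gap_s=0.5):
    """Count tail beats from a tail-deflection trace (degrees, sampled at fs Hz).

    A beat is a supra-threshold deflection epoch; consecutive epochs whose
    intervening resting period is shorter than `merge_gap_s` are merged into
    a single movement, following the criterion given in the Methods.
    """
    above = np.abs(np.asarray(deflection_deg)) > threshold
    if not above.any():
        return 0
    # Start/end indices of contiguous supra-threshold runs
    edges = np.diff(above.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, above.size]
    # Merge runs separated by less than merge_gap_s of rest
    max_gap = int(merge_gap_s * fs)
    n_beats = 1
    for i in range(1, len(starts)):
        if starts[i] - ends[i - 1] >= max_gap:
            n_beats += 1
    return n_beats
```

For example, two deflections separated by 0.1 s of rest count as one movement, while a deflection 2 s later counts as a new beat.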
Simultaneous whole-brain imaging and optogenetic stimulation Whole-brain calcium imaging was performed at 2.5 Hz (a more than optimal volumetric rate considering the typical time constant of the exponential decay for the nuclear localized version of the GCaMP6s sensor τ: 3.5 s ) with 41 stacked z-planes spanning a depth of 200 μm. An interslice spacing of 5 μm was chosen because it coincides with the half width at half maximum of the detection axial PSF. Before each measurement, the scanning amplitude of the resonant galvo mirror was tuned to produce a virtual light-sheet with a length matching the size of the larval brain in the rostro-caudal direction. The laser wavelength was set to 920 nm to optimally excite the GCaMP6s fluorescence. Unless otherwise stated, the power at the sample of the 920 nm laser was set to 60 mW. Optogenetic stimulation was performed at 1064 nm with a laser power at the sample of 30 mW (unless otherwise specified). Before each experimental session, the 1064 nm stimulation laser was finely aligned to the center of the camera field of view. Then, by means of the galvo mirrors present in the stimulation path, the offset position of the stimulation beam was coarsely displaced in the x - y direction toward the center of the area to be stimulated. During the optogenetics experiment the stimulation volume was covered by discontinuously scanning the beam focus via the two pairs of AODs. A typical volume of 50 × 50 × 50 μm 3 was covered with 6250 points (point x-y density: 1 point/0.25 μm 2 ; z step: 5 μm) with a point dwell time of 20 μs (overall time: 125 ms). The medial plane of the stimulation volume (chirp = 0 MHz/μs, null defocus) was adjusted to overlap with the medial plane of the LHb. Unless otherwise stated, each stimulus consisted of four complete cycles of the entire volume, lasting 500 ms. 
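As a quick sanity check of the scan-timing figures above, the per-cycle and per-stimulus durations follow directly from the point count and the per-point dwell time:

```python
# Sanity check of the AOD stimulation-scan timing quoted in the text.
points_per_volume = 6250    # points covering the 50 x 50 x 50 um^3 volume
dwell_time_s = 20e-6        # per-point dwell time (20 us)
cycles_per_stimulus = 4

cycle_time_s = points_per_volume * dwell_time_s   # one full pass: 125 ms
stimulus_s = cycles_per_stimulus * cycle_time_s   # four cycles: 500 ms
```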
Each stimulation trial consisted of 100 s of whole-brain calcium imaging, during which 5 optogenetic stimuli (interstimulus interval: 16 s, based on the characterization experiments performed, in order to trigger activation events only after the end of the previous calcium transient) were applied at the same volumetric site. Six trials were performed on each larva, with an intertrial interval ranging from 1 to 3 min. Overall, each larva was imaged for 10 min during which it received 30 stimuli. Data analysis Preprocessing Whole-brain calcium imaging data were processed as follows. Images composing the hyperstacks were first 2 × 2 binned (method: average) in the x and y dimensions to obtain a quasi-isotropic voxel size (4.4 × 4.4 × 5 μm 3 ). Then, employing a custom tool written in Python 3, we computed the voxel-wise ΔF/F 0 of each volumetric recording, after background subtraction. F 0 was calculated using FastChrom’s baseline estimation method . Quantification of imaging crosstalk and optogenetic activation extent/specificity To quantify crosstalk during imaging we first considered different metrics to evaluate neuronal activity levels (Supplementary Fig. ). We computed the standard deviation (SD) over time, the number of calcium peaks per minute, and the average peak amplitude of each voxel composing the larval brain during 5 min of whole-brain calcium imaging (Supplementary Fig. ). For automatic calcium peaks identification, we set the following thresholds: minimum peak prominence 0.05; minimum peak FWHM 2.5 s, minimum peak distance 5 s. We found the SD to have improved sensitivity in discriminating between diverse conditions compared to the number of peaks per minute (Supplementary Fig. ). These results reflected those observed by adopting the average amplitude of calcium peaks (Supplementary Fig. ) as an activity metric. We thus employed SD over time as a proxy of neuronal activity levels since its results do not depend on predefined thresholds. 
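The peak-detection thresholds listed above map naturally onto `scipy.signal.find_peaks` (prominence, width, and distance arguments, with times converted to samples at the 2.5 Hz volume rate). The snippet below is a hedged sketch on a synthetic trace, with our own function name, not the authors' code:

```python
import numpy as np
from scipy.signal import find_peaks

FS = 2.5  # volumetric imaging rate (Hz)

def activity_metrics(dff, fs=FS):
    """Return (SD over time, peaks per minute, mean peak amplitude) for one
    voxel trace, using the thresholds listed in the Methods."""
    peaks, _ = find_peaks(
        dff,
        prominence=0.05,       # minimum peak prominence
        width=2.5 * fs,        # minimum FWHM: 2.5 s, in samples
        distance=int(5 * fs),  # minimum peak separation: 5 s
    )
    minutes = len(dff) / fs / 60.0
    mean_amp = float(np.mean(dff[peaks])) if len(peaks) else 0.0
    return float(np.std(dff)), len(peaks) / minutes, mean_amp

# Synthetic ΔF/F0 trace: three calcium-transient-like Gaussian bumps over 100 s
t = np.arange(0, 100, 1 / FS)
dff = sum(0.5 * np.exp(-((t - c) ** 2) / (2 * 2.0 ** 2)) for c in (20, 50, 80))
sd, peaks_per_min, amp = activity_metrics(dff)
```

With three transients in 100 s, the detector reports 1.8 peaks per minute, matching the construction of the trace.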
Therefore, the distribution of SD values calculated for each brain was first normalized with respect to the total number of voxels and then pooled (method: average) according to the larval strain (ReaChR+ and ReaChR−). Similarly, the normalized distributions of SD values for ReaChR+ and ReaChR− larvae subjected to 100 s of whole-brain imaging during which they received 5 photostimulations (1064 nm) were calculated to evaluate the effect of the optogenetic stimulation. Imaging crosstalk and optogenetic stimulation indices were calculated using the Hellinger distance as a measure of dissimilarity between two probability distributions P and Q: $$H(P,Q)=\sqrt{1-\sum_{i=1}^{n}\sqrt{P_{i}Q_{i}}}$$ The errors in the Hellinger distances were calculated according to error propagation theory as follows: $$\Delta H=\sqrt{\sum_{i=1}^{n}\frac{Q_{i}^{2}}{4H^{2}}\cdot \Delta P_{i}^{2}+\frac{P_{i}^{2}}{4H^{2}}\cdot \Delta Q_{i}^{2}}$$ Finally, normalized distributions of SD values for ReaChR− larvae exposed either to imaging (100 s) only or to imaging and photostimulation (100 s and 5 stimuli at 1064 nm) were calculated to evaluate the specificity of the effect observed. Quantification of tail movements during whole-brain light-sheet imaging Tail deflection (i.e., the sum of relative tail segment angles) time traces were processed to detect and count tail beats. In detail, deflection peaks were considered tail beats if they exceeded an absolute threshold of 20°. Consecutive tail deflections that did not return to the resting position for at least 0.5 s were considered part of the same movement. The relative number of tail beats during imaging ON (Fig. ) was calculated for each trial of each larva by dividing the number of tail movements during the imaging ON period by that quantified during the imaging OFF period. To combine behavioral and brain activity recordings (Fig. 
), the average fluorescence time trace of the hindbrain acquired at 2.5 Hz was first interpolated to match the frequency of behavioral recordings (300 Hz). Then, ΔF/F 0 was calculated as previously described. Characterization of stimulation-induced calcium transients To characterize neuronal activation as a function of stimulation parameters (scan time and laser power), we first extracted the voxel time series averaged over the entire stimulation site (i.e., left habenula) from 4D ΔF/F 0 hyperstacks. Time traces were windowed to isolate and align the three stimulation events contained in a single trial. Isolated calcium transients were analyzed using the peak analyzer function in Origin Pro 2021 (OriginLab Corp.) to obtain peak amplitude, rise/decay time (i.e., time from baseline to peak and time from peak to baseline, respectively) and duration values. Pooled peak duration data were obtained by first averaging three events of the same larva (intra-individual) and then averaging data between larvae (inter-individual). Activation probability and correlation maps Using a custom Python tool, we calculated the probability of each voxel composing the brain to be active in response to the optogenetic stimulation. For each stimulation event, a voxel was considered active if its change in fluorescence in a 2 s time window after the stimulation exceeded three standard deviations above its baseline level (2 s pre-stimulation). Only events in which the voxels inside the stimulation volume met the activation criterion were considered effective optogenetic stimulations. By iterating this process for all the stimulation events performed (on the same site of the same larva), we calculated the activation probability of each voxel as the number of times the voxel exceeded the threshold divided by the total number of valid stimulations. 
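This activation criterion (post-stimulus ΔF/F0 within 2 s exceeding the 2 s pre-stimulus baseline mean by three standard deviations) can be sketched as below; the variable names and the synthetic demo trace are our own:

```python
import numpy as np

def activation_probability(dff, stim_frames, fs=2.5):
    """Fraction of stimulation events after which a voxel trace is 'active':
    its ΔF/F0 within 2 s post-stimulus exceeds the baseline mean + 3 SD
    (baseline: 2 s pre-stimulus), as defined in the Methods."""
    win = int(2 * fs)  # 2 s window in volumes
    active = 0
    for s in stim_frames:
        pre = dff[s - win:s]
        post = dff[s:s + win]
        if post.max() > pre.mean() + 3 * pre.std():
            active += 1
    return active / len(stim_frames)

# Demo on a synthetic voxel trace: 5 stimuli, 4 of which evoke a transient
frames = np.arange(300)
dff = np.where(frames % 2 == 0, 0.01, -0.01).astype(float)  # small baseline fluctuation
stims = [50, 100, 150, 200, 250]
for s in stims[:4]:
    dff[s + 2] += 0.5  # evoked response within the 2 s post-stimulus window
prob = activation_probability(dff, stims)  # 4 of 5 stimulations -> 0.8
```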
Employing a second Python tool, we then computed activity correlation maps showing Pearson’s correlation coefficient between each voxel and the activity extracted from the stimulation site (seed). The 3D maps of correlation and activation probability obtained were subsequently aligned. First, the acquired 4D hyperstacks were time averaged. Second, the resulting 3D stack of each larva was registered to a reference brain. Nonrigid image registration was performed using the open source software Computational Morphometry Toolkit (CMTK 3.3.1, https://www.nitrc.org/projects/cmtk/ ) and the ImageJ user interface , employing the command string (-awr 01 -X 52 -C 8 -G 80 -R 3 -A “--accuracy 1.6” -W “--accuracy 0.4”). The calculated morphing transformations were ultimately applied to the corresponding 3D maps. Following the zebrafish brain atlases , , the volumetric regions of interest (ROIs) used in the analysis were manually drawn onto the reference brain (employing ImageJ), based on anatomical boundaries. The 10 volumetric ROIs were then adopted to extract from each map the voxel-wise distribution of activation probability/correlation coefficient values used for further analyses. The binarized functional connectivity map shown in Fig. was obtained after applying a threshold on Pearson’s correlation coefficient to the average correlation map shown in Fig. . The 0.12 value adopted represented the correlation coefficient threshold separating significant from non-significant correlations among brain regions (see Fig. ). Cross-wavelet power spectrum analysis The possible coupling between the delineated brain ROIs and the stimulation site was also characterized in the spectral domain by quantifying and inspecting their cross-wavelet power spectral density (CPSD) . 
The wavelet transforms of the average activity signals extracted from each ROI were computed using the Morlet mother wavelet, adopting a central frequency f0 = 1 Hz as the time-frequency resolution parameter, and 256 voices per octave for fine frequency discretization. Spurious time-boundary effects were addressed by first applying a zero-padding scheme to the original time series, and then isolating the so-called cone of influence, i.e., the time-frequency region where boundary distortions in the CPSD estimates are negligible.

Granger causality analysis
The causal link between the activity of different brain regions was explored by analyzing their Granger causality (GC). GC analysis among ΔF/F0 time series of brain regions was performed in R, with the “lmtest” library. To select an appropriate lag order, we computed both the Akaike (AIC) and Bayesian (BIC) information criteria of the complete autoregressive model for each comparison (each trial and each possible region pair) for lag orders from 1 to 8 (0.4–3.2 s). Then, for each comparison we selected the lag order associated with the minimum value of the information criteria. Finally, we computed the mode of this list and used this unique lag order for every comparison of the final GC analysis. The mode values based on AIC and BIC coincided: a lag order of 2, which corresponds to a 0.8 s lag. For each larva, trial, region pair, and causality direction, we computed the average F statistic of the tests. Finally, multiplicity correction of the p-values was performed with a false discovery rate approach using the Benjamini–Hochberg method (GC analysis results are reported in Supplementary Data ). The F statistic was presented in Fig. as average values over all pairs having at least two significant trials. The F statistic in the graph of Fig. was presented as arrows color-mapped according to the average F value found between brain regions’ connections.
The direction of each arrow indicates the direction of the causal interaction, while the arrow width represents the proportion of significant trials over the total. Only causal links having at least 33% of significant trials were depicted (see thresholded matrix in Supplementary Fig. ).

Partial correlation analysis
In order to gain insight into the directness of the interactions between brain regions, we analyzed the partial correlation between pairs of region-wise mean ΔF/F0 time series, aiming to capture their residual coupling after the influence of all other regions was accounted for. Pairwise partial correlation coefficients were obtained as described by Han and colleagues. In detail, the partial correlation between a pair of brain regions (i.e., LHb-IPN, LHb-RHb, and T-RHb), A and B, was evaluated as the Pearson’s correlation coefficient between the regressed time series ΔF/F0 A,R and ΔF/F0 B,R, suitably corrected for the contribution of all the other regions’ mean activity signals. These time series were estimated by multiple regression on the original traces ΔF/F0 A and ΔF/F0 B, through the evaluation of the Moore-Penrose pseudoinverse of the remaining regions’ time series matrix, C:

$$\beta_A = C^{+} \cdot \Delta F/F_{0\,A}$$

$$\beta_B = C^{+} \cdot \Delta F/F_{0\,B}$$

where C⁺ is the Moore-Penrose pseudoinverse matrix:

$$C^{+} = \left(C^{T} C\right)^{-1} C^{T}$$

here computed using the Python SciPy library.
The regressed time series were then obtained as:

$$\Delta F/F_{0\,A,R} = \Delta F/F_{0\,A} - C \cdot \beta_A$$

$$\Delta F/F_{0\,B,R} = \Delta F/F_{0\,B} - C \cdot \beta_B$$

The directness of the mutual interaction between two brain regions was finally detected from the presence of both statistically significant Pearson’s and partial correlation coefficients. When only the Pearson’s correlation is significant, the interaction is defined as indirect; when only the partial correlation is significant, we are observing what is defined as a pseudo-correlation. Results of the partial correlation analysis can be found in Supplementary Data .

Statistics and reproducibility
To guarantee reproducibility of the findings and avoid bias, the larvae employed in the experiments never belonged to a single batch of eggs. No a priori sample size calculation was performed. The sample size employed was justified by the high degree of consistency in the results obtained from different larvae. The expression patterns of GCaMP6s and ReaChR were evaluated in N = 1 ReaChR+ larva by confocal imaging. Crosstalk activation of ReaChR by 920 nm excitation light-sheet imaging was evaluated on N = 3 ReaChR+ and N = 3 ReaChR− larvae, in the brain activity experiment, and N = 4 ReaChR+ and N = 4 ReaChR− larvae in the combined brain/behavioral activities experiment. The effect of optogenetic stimulation was evaluated on N = 6 ReaChR+ and N = 6 ReaChR− larvae. Characterization of optogenetically induced calcium transients as a function of stimulation settings was performed on N = 4 ReaChR+ larvae ( n = 3 calcium transients per larva). The activation probability, correlation, and causality were evaluated on N = 6 ReaChR+ larvae ( n = 30 stimulations per larva). OriginPro 2021 (OriginLab Corp.)
was used to carry out all the statistical analyses. Unless otherwise stated, results were considered statistically significant if their corresponding p-value was less than 0.05 (* P < 0.05; ** P < 0.01; *** P < 0.0001). Both intergroup and intragroup statistical significance of imaging crosstalk (Fig. and Supplementary Fig. ) was assessed using two-way ANOVA (factors: zebrafish strain, imaging power) followed by post-hoc comparisons with Tukey’s method. Two-way ANOVA and Tukey’s post-hoc comparisons were also employed to quantify the statistical significance of tail beats between imaging OFF and ON conditions (Fig. ; factors: zebrafish strain, imaging presence). For intergroup statistical evaluations of both activation probability (Fig. ) and Pearson’s correlation coefficient (Fig. ), we first verified the normal distribution of the data using the Shapiro-Wilk test (see Supplementary Fig. for test results) and then performed one-way ANOVA (factor: brain region), followed by post-hoc comparisons employing Tukey’s method. Statistical comparisons of the relative number of tail beats during 920 nm imaging (Fig. ) and of the median SD values used to evaluate the effect of optogenetic stimulation (Fig. and Supplementary Fig. ) were performed using an unpaired t test. Statistical comparisons of the average distributions of SD (Fig. ) and Pearson’s correlation coefficient (Fig. ) values were performed with the two-sample Kolmogorov-Smirnov (KS) test, applying the Bonferroni correction ( α = 0.05/3 = 0.01667, in both cases).

Reporting summary
Further information on research design is available in the Reporting Summary linked to this article.

All-optical control and readout of zebrafish neuronal activity is achieved through a custom system that combines a 2P dual-sided illumination LSFM for whole-brain calcium imaging and an AOD-based 2P light-targeting system for 3D optogenetic stimulation (Supplementary Fig. and Fig. ).
The two systems have been slightly modified with respect to the previously published versions to optically couple them. Briefly, the 2P light-sheet imaging path is equipped with a pulsed Ti:Sa laser (Chameleon Ultra II, Coherent), tuned at 920 nm. After a group delay dispersion precompensation step, the near-infrared beam is adjusted in power and routed to an electro-optical modulator (EOM) employed to switch the light polarization orientation between two orthogonal states at a frequency of 100 kHz. A half-wave plate and a quarter-wave plate are used to control the light polarization plane and to pre-compensate for polarization distortions. Then, the beam is routed to a hybrid pair of galvanometric mirrors (GMs). One is a fast resonant mirror (CRS-8 kHz, Cambridge Technology) used to digitally generate the virtual light-sheet (scan frequency: 8 kHz) sweeping the larva along the rostro-caudal direction. The second GM is a closed-loop mirror (6215H, Cambridge Technology) used to displace the light-sheet along the dorso-ventral direction. The scanned beam is relayed by a scan lens and a tube lens into a polarizing beam splitter, which diverts the light alternately into either of the two excitation arms, according to the instantaneous polarization state imposed by the EOM. In order to maximize fluorescence collection, a half-wave plate placed after the beam splitter in one of the two arms is used to rotate the light polarization plane so that light coming from both excitation paths is polarized parallel to the table surface. Through a twin relay system, the beams are ultimately routed into the excitation objectives (XLFLUOR4X/340/0,28, Olympus). The excitation light is focused inside a custom fish-water-filled imaging chamber, heated to 28.5 °C. The fine positioning of the sample under the detection objective is performed with three motorized stages. The fluorescence emitted by the sample is collected with a water-immersion objective (XLUMPLFLN20XW, Olympus, NA = 1).
Finally, a relay system brings the collected signal to an electrically tunable lens (ETL; EL-16-40-TC-VIS-5D-C, Optotune), which performs remote axial scanning of the detection objective focal plane in sync with the light-sheet closed-loop displacement. The collected signal is filtered (FF01-510/84-25, Semrock) to select green emission. The filtered light reaches an air objective (UPLFLN10X2, Olympus, NA = 0.3), which demagnifies the image onto a subarray (512 × 512 pixels) of an sCMOS camera (ORCA-Flash4.0 V3, Hamamatsu) working at 16-bit depth of integer gray levels. The final magnification of the imaging system is 3×, with a resulting pixel size of 2.2 μm. Below the transparent PMMA bottom of the imaging chamber, a high-speed CMOS camera (Blackfly S USB3, FLIR) equipped with a varifocal objective lens (employed at 50 mm; YV3.3x15SA-2, Fujinon) is positioned to perform behavioral imaging (tail deflections) during light-sheet imaging. Illumination for behavioral imaging is provided by an 850 nm LED (M850L3, Thorlabs) positioned at an angle above the imaging chamber. A bandpass filter (FF01-835/70-25, Semrock) is placed in front of the objective lens to block high-intensity light from the 920 nm light-sheet (see Supplementary Fig. ). Recordings are performed using a 300 × 300 pixel subarray of the camera chip, covering the entire larval body. This configuration provides sufficient magnification (pixel size: 15.4 μm) and contrast for live tail tracking. The 3D light-targeting system employs a 1064 nm pulsed laser (FP-1060-5-fs Fianium FemtoPower, NKT Photonics, Birkerød, Denmark) as an excitation source. The output power (max. 5 W) is attenuated and conveyed to a half-wave plate, which is employed to adjust the polarization of the beam, before reaching the first AOD stage (DTSXY-400, AA Opto Electronic, Orsay, France). The output beam is then coupled with the second AOD stage through two 1:1 relay systems.
From the exit of the second stage, by means of a 1:1 relay system, the beam is routed to a pair of galvanometric mirrors (GVS112, Thorlabs). The scanned beam is then optically coupled with a scan lens (AC254-100-B, Thorlabs) and a tube lens ( F = 300 mm, in turn formed by two achromatic doublets, AC254-150-C-MLE, F = 150 mm, Thorlabs, customized in this way to avoid aberrations). The excitation light is finally deflected by a dichroic mirror (DMSP926B, Thorlabs) toward the back pupil of the illumination objective, which is also employed by the imaging system for fluorescence detection. The detailed optical characterization of the 2P light-sheet system was described in a previous work of our group. Summarizing, each of the light sheets coming from the two excitation arms has a transversal full width at half maximum (FWHM) at the waist of 6 µm and a longitudinal FWHM of 327 µm. The lateral FWHM of the detection PSF is 5.2 µm. Herein, we describe the optical performance of the AOD-based light-targeting system used for optogenetic stimulation. When using AODs to move the beam away from its native focus, the illumination axial displacement (or defocus) has a linear relation with the chirp parameter α, i.e., the rate of frequency change of the driving radio waves. We thus measured the axial displacement of the focused beam as a function of α by illuminating a fluorescent solution (Sulforhodamine 101; S7635, Sigma-Aldrich) and localizing the maximum fluorescence peak in the volume as a function of α, which ranged from −1 MHz/µs to 1 MHz/µs (step size 0.1 MHz/µs). For each chirp configuration, the ETL in the detection path was used to obtain a 200-µm deep stack (step size: 1 µm) centered at the nominal focal plane of the illumination objective. Supplementary Fig. shows the axial position of the fluorescence intensity peak as a function of the chirp addressed, following the expected linear trend.
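This calibration amounts to a straight-line fit of peak position versus chirp; a minimal sketch with synthetic data (the 50 µm per MHz/µs slope and the noise level are placeholders chosen to be consistent with the measured coefficient, not measured values):

```python
import numpy as np

# Chirp values addressed during the calibration scan (MHz/µs)
alpha = np.arange(-1.0, 1.05, 0.1)

# Hypothetical axial positions of the localized fluorescence peak (µm):
# a linear response plus measurement noise.
rng = np.random.default_rng(1)
z_peak = 50.0 * alpha + rng.normal(0.0, 1.0, alpha.size)

# Conversion coefficient (µm per MHz/µs) from the slope of the linear fit
slope, intercept = np.polyfit(alpha, z_peak, 1)
```

The fitted slope is the defocus-per-chirp conversion coefficient discussed in the text.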
We evaluated the conversion coefficient from the slope of the linear fit, which was 50.44 ± 3.45 µm/MHz/µs (mean ± sd). We also measured the amount of energy released on the sample as a function of the chirp parameter or, basically, as a function of the time spent illuminating axially displaced targets. Indeed, the beam spends slightly different periods illuminating spots displaced in different z-planes, as the effective frequency ramping time is inversely proportional to the chirp parameter α imposed on the RF signals driving the AODs. As explained in detail in a previous work, we partially compensated for this non-uniformity in the distribution of power deposited along the axial direction by repeatedly triggering equal frequency ramps within the desired dwell time (here, 20 µs per point), using what we called the multi-trigger modality. With respect to the conventional single-trigger modality, we effectively multiplied the minimum energy deposited on different focal planes, while keeping a stable dwell time. Supplementary Fig. shows in black the usual light transmission distribution collected as a function of the chirp parameter (single-trigger modality) and in blue the distribution obtained with our multi-trigger approach. We then measured the point spread function (PSF) of the light-targeting system using subdiffraction-sized fluorescent beads (TetraSpeck microspheres, radius 50 nm; T7279, Invitrogen) embedded in agarose gel (1.5%, w/v) at a final concentration of 0.0025% (vol/vol). The measurements were performed on a field of view of 100 × 100 μm², performing raster scans of 500 × 500 points. The objective was moved axially covering a 200 μm range ( z step: 1 μm), and the emitted signal was conveyed and collected on an auxiliary photomultiplier tube positioned downstream of the fluorescence-collecting objective.
The radial and axial intensity profiles of 25 beads were computed using the open-source software ImageJ and fitted with Gaussian functions in Origin Pro 2021 (OriginLab Corp.) to estimate the FWHM. Supplementary Fig. shows, as an example, the raw fluorescence distributions of 5 beads and the Gaussian fit corresponding to the average FWHM, plotted in red and black for the radial and axial PSF, respectively. We found them to be FWHMr = 0.81 ± 0.06 µm and FWHMa = 3.79 ± 0.66 µm (mean ± sd). This measurement was performed by driving the AODs with stationary RF signals. To evaluate possible spatial distortions of the illumination arising away from the nominal focal plane of the objective, we repeated the same PSF measurement for different chirps or, in other words, for different AOD-controlled axial displacements (80 µm range, step size of 20 µm). The average FWHM obtained for the bead intensity distribution is shown in Supplementary Fig. . The radial PSF of the system remains approximately constant as a function of the chirp parameter. A small change is due to the chromatic dispersion affecting the laser beam interacting with the crystal. The deflection angle induced by the AODs on the incident beam is frequency and wavelength dependent. This means that a broadband laser is readily dispersed spatially by the crystal and that the frequency variations can slightly affect this distortion. Moreover, the axial PSF tends to become slightly oblong with increasing axial displacement. This effect is attributable to the temporal dispersion affecting a short-pulsed laser beam interacting with the crystal. This temporal broadening reduces the axial 2P excitation efficiency, generating a larger axial PSF. The effect is more evident when a chirp is applied to the RF signals driving the AODs. Under these conditions, the beam reaches the objective back pupil in a non-collimated state.
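The FWHM estimation step can be sketched generically (a SciPy curve fit rather than the Origin Pro routine used by the authors; recall that FWHM = 2√(2 ln 2)·σ for a Gaussian):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, x0, sigma, offset):
    """Gaussian with amplitude a, center x0, width sigma and baseline offset."""
    return a * np.exp(-((x - x0) ** 2) / (2.0 * sigma ** 2)) + offset

def fwhm_from_profile(x, profile):
    """Estimate the FWHM of a bead intensity profile via a Gaussian fit."""
    p0 = [profile.max() - profile.min(), x[np.argmax(profile)],
          (x[-1] - x[0]) / 10.0, profile.min()]          # rough initial guess
    (_, _, sigma, _), _ = curve_fit(gaussian, x, profile, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)
```

Applying this to the radial and axial profiles of each bead and averaging gives FWHM values analogous to those reported above.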
Future efforts will be devoted to the compensation of chromatic aberration and temporal dispersion, for example by employing a highly dispersive prism upstream of the AODs. The double Tg(elavl3:H2B-GCaMP6s; elavl3:ReaChR-TagRFP) zebrafish line was obtained by outcrossing the Tg(elavl3:H2B-GCaMP6s) and Tg(elavl3:ReaChR-TagRFP) lines on the slc45a2 b4/- heterozygous albino background, which we previously generated. The double transgenic line expresses the fluorescent calcium reporter GCaMP6s (nucleus) and the red-shifted light-activatable cation channel ReaChR (plasma membrane) in all differentiated neurons. ReaChR is expressed as a fusion peptide with the red fluorescent protein TagRFP to ensure its localization. Zebrafish strains were reared according to standard procedures, and fed twice a day with dry food and brine shrimp nauplii ( Artemia salina ), both for nutritional and environmental enrichment. For the experiments, we employed N = 20, 5 dpf Tg(elavl3:H2B-GCaMP6s; elavl3:ReaChR-TagRFP) and N = 13, 5 dpf Tg(elavl3:H2B-GCaMP6s) larvae, both of which were on the slc45a2 b4/b4 homozygous albino background. Zebrafish larvae used in the experiments were maintained at 28.5 °C in fish water (150 mg/L Instant Ocean, 6.9 mg/L NaH2PO4, 12.5 mg/L Na2HPO4, 1 mg/L methylene blue; conductivity 300 μS/cm, pH 7.2) under a 14/10 light/dark cycle, according to standard protocols. Experiments involving zebrafish larvae were carried out in compliance with European and Italian laws on animal experimentation (Directive 2010/63/EU and D.L. 4 March 2014, n.26, respectively), under authorization n.606/2020-PR from the Italian Ministry of Health. To select calcium reporter/opsin-expressing larvae for use in the experiments, 3 dpf embryos were subjected to fluorescence screening. The embryos were first slightly anesthetized with a bath in tricaine (160 mg/L in fish water; A5040, Sigma-Aldrich) to reduce movement.
Using a stereomicroscope (Stemi 508, Carl Zeiss) equipped with LEDs for fluorescence excitation (for GCaMP6s: blue LED, M470L3; for TagRFP: green LED, M565L3, both from Thorlabs) and fluorescence filters to block the excitation light (for GCaMP6s: FF01-510/84-25; for TagRFP: FF01-593/LP-25, both from Semrock), embryos were selected according to the presence of brighter green/red fluorescent signals in the central nervous system. Screened embryos were transferred to a Petri dish containing fresh fish water and kept in an incubator at 28.5 °C until 5 dpf. Zebrafish larvae were mounted as previously described. Briefly, each larva was transferred into a reaction tube containing 1.5% (w/v) low-gelling temperature agarose (A9414, Sigma-Aldrich) in fish water, kept fluid on a heater set at 38 °C. Using a plastic pipette, larvae were then placed on a microscope slide inside a drop of melted agarose. Before gel polymerization, their position was adjusted using a pair of fine pipette tips so that the dorsal portion faced upwards. To avoid movement artifacts during the measurements, larvae were paralyzed by a 10-min treatment with 2 mM d-tubocurarine (93750, Sigma-Aldrich), a neuromuscular blocker. For tail-free preparations, upon gel polymerization, the agarose caudal to the swimming bladder was removed using a scalpel. In this case, no paralyzing agent was applied. Mounted larvae were then placed inside the imaging chamber filled with fish water and thermostated at 28.5 °C for the entire duration of the experiment. Confocal imaging of a 5 dpf Tg(elavl3:H2B-GCaMP6s; elavl3:ReaChR-TagRFP) larva on an albino background was performed to evaluate the spatial expression of the two proteins. The larva was mounted in agarose as described above and deeply anesthetized with tricaine (300 mg/L in fish water).
We employed a commercial confocal microscope (Ti2, Nikon) equipped with two continuous-wave lasers emitting at 488 and 561 nm for GCaMP6s and TagRFP excitation, respectively. Imaging was performed using a 10× objective, allowing the entire head of the animal to fit into the field of view. Using a piezo-electric motor (PIFOC, Physik Instrumente - PI), the objective was moved to 182 consecutive positions ( z step: 2 μm) to acquire the volume of the larval head. Head-restrained larvae, capable of performing wide tail deflections, were imaged from below the 2P LSFM imaging chamber using a dedicated high-speed camera (see Optical setup for details). Images were streamed at 300 Hz via a USB3 connection to a workstation running a custom tool for live tail movement tracking, developed using the open-source Python Stytra package. The larval tail length was divided into 9 segments, and the sum of their relative angles was employed to quantify tail deflection. Tail movements of both ReaChR+ and ReaChR− larvae were tracked for 200 s. During the first half, 2P LSFM imaging was off (imaging OFF); during the second half, larvae were subjected to whole-brain light-sheet imaging (imaging ON) with the same parameters described in the previous section. Each larva underwent 3 consecutive 200-s simultaneous whole-brain/behavioral recordings (inter-measurement interval less than 1 min). Whole-brain calcium imaging was performed at 2.5 Hz (a volumetric rate more than adequate considering the typical exponential decay time constant of the nuclear-localized GCaMP6s sensor, τ = 3.5 s) with 41 stacked z-planes spanning a depth of 200 μm. An interslice spacing of 5 μm was chosen because it coincides with the half width at half maximum of the detection axial PSF.
Before each measurement, the scanning amplitude of the resonant galvo mirror was tuned to produce a virtual light-sheet with a length matching the size of the larval brain in the rostro-caudal direction. The laser wavelength was set to 920 nm to optimally excite GCaMP6s fluorescence. Unless otherwise stated, the power at the sample of the 920 nm laser was set to 60 mW. Optogenetic stimulation was performed at 1064 nm with a laser power at the sample of 30 mW (unless otherwise specified). Before each experimental session, the 1064 nm stimulation laser was finely aligned to the center of the camera field of view. Then, by means of the galvo mirrors present in the stimulation path, the offset position of the stimulation beam was coarsely displaced in the x-y direction toward the center of the area to be stimulated. During the optogenetic experiments, the stimulation volume was covered by discontinuously scanning the beam focus via the two pairs of AODs. A typical volume of 50 × 50 × 50 μm³ was covered with 6250 points (point x-y density: 1 point/0.25 μm²; z step: 5 μm) with a point dwell time of 20 μs (overall time: 125 ms). The medial plane of the stimulation volume (chirp = 0 MHz/μs, null defocus) was adjusted to overlap with the medial plane of the LHb. Unless otherwise stated, each stimulus consisted of four complete cycles of the entire volume, lasting 500 ms. Each stimulation trial consisted of 100 s of whole-brain calcium imaging, during which 5 optogenetic stimuli (interstimulus interval: 16 s, based on the characterization experiments performed, in order to trigger activation events only after the end of the previous calcium transient) were applied at the same volumetric site. Six trials were performed on each larva, with an intertrial interval ranging from 1 to 3 min. Overall, each larva was imaged for 10 min, during which it received 30 stimuli.

Preprocessing
Whole-brain calcium imaging data were processed as follows.
Images composing the hyperstacks were first 2 × 2 binned (method: average) in the x and y dimensions to obtain a quasi-isotropic voxel size (4.4 × 4.4 × 5 μm³). Then, employing a custom tool written in Python 3, we computed the voxel-wise ΔF/F0 of each volumetric recording, after background subtraction. F0 was calculated using FastChrom’s baseline estimation method.

Quantification of imaging crosstalk and optogenetic activation extent/specificity
To quantify crosstalk during imaging, we first considered different metrics to evaluate neuronal activity levels (Supplementary Fig. ). We computed the standard deviation (SD) over time, the number of calcium peaks per minute, and the average peak amplitude of each voxel composing the larval brain during 5 min of whole-brain calcium imaging (Supplementary Fig. ). For automatic calcium peak identification, we set the following thresholds: minimum peak prominence 0.05; minimum peak FWHM 2.5 s; minimum peak distance 5 s. We found the SD to have better sensitivity in discriminating between the diverse conditions compared to the number of peaks per minute (Supplementary Fig. ). These results mirrored those observed when adopting the average amplitude of calcium peaks (Supplementary Fig. ) as an activity metric. We thus employed the SD over time as a proxy of neuronal activity levels, since its results do not depend on predefined thresholds. Therefore, the distribution of SD values calculated for each brain was first normalized with respect to the total number of voxels and then pooled (method: average) according to the larval strain (ReaChR+ and ReaChR−). Similarly, the normalized distributions of SD values for ReaChR+ and ReaChR− larvae subjected to 100 s of whole-brain imaging during which they received 5 photostimulations (1064 nm) were calculated to evaluate the effect of the optogenetic stimulation.
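These peak thresholds can be sketched with SciPy's peak finder (a minimal illustration; converting the 2.5 s width and 5 s distance thresholds into frames assumes the 2.5 Hz volumetric rate, and find_peaks measures width at half prominence, which we use as a stand-in for the FWHM criterion):

```python
import numpy as np
from scipy.signal import find_peaks

FS = 2.5  # volumetric imaging rate (Hz), assumed from the acquisition settings

def voxel_activity_metrics(trace, fs=FS):
    """SD, peaks per minute and mean peak amplitude for one ΔF/F0 voxel trace."""
    peaks, _ = find_peaks(
        trace,
        prominence=0.05,       # minimum peak prominence
        width=2.5 * fs,        # minimum FWHM: 2.5 s expressed in frames
        distance=int(5 * fs),  # minimum peak distance: 5 s in frames
    )
    minutes = len(trace) / fs / 60.0
    rate = len(peaks) / minutes
    amp = float(np.mean(trace[peaks])) if len(peaks) else 0.0
    return float(np.std(trace)), rate, amp
```

Applying this per voxel yields the three activity metrics compared above, of which the SD is threshold-free.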
Imaging crosstalk and optogenetic stimulation indices were calculated using the Hellinger distance as a measure of dissimilarity between two probability distributions P and Q:

$$H(P,Q) = \sqrt{1 - \sum_{i=1}^{n} \sqrt{P_i\, Q_i}}$$

The errors in the Hellinger distances were calculated according to error propagation theory as follows:

$$\Delta H = \sqrt{\sum_{i=1}^{n} \frac{Q_i^{2}}{4H^{2}} \cdot \Delta P_i^{2} + \frac{P_i^{2}}{4H^{2}} \cdot \Delta Q_i^{2}}$$

Finally, normalized distributions of SD values for ReaChR− larvae exposed either to imaging only (100 s) or to imaging and photostimulation (100 s and 5 stimuli at 1064 nm) were calculated to evaluate the specificity of the effect observed.

Quantification of tail movements during whole-brain light-sheet imaging
Tail deflection time traces (i.e., the sum of the relative tail segment angles) were processed to detect and count tail beats. In detail, deflection peaks were considered tail beats if they exceeded an absolute threshold of 20°. Consecutive tail deflections that did not return to the resting position for at least 0.5 s were considered part of the same movement. The relative number of tail beats during imaging ON (Fig. ) was calculated for each trial of each larva by dividing the number of tail movements during the imaging ON period by that quantified during the imaging OFF period. To combine behavioral and brain activity recordings (Fig. ), the average fluorescence time trace of the hindbrain acquired at 2.5 Hz was first interpolated to match the frequency of the behavioral recordings (300 Hz). Then, ΔF/F0 was calculated as previously described.
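The tail-beat detection rules above (20° absolute threshold, merging of deflections separated by less than 0.5 s of rest) can be sketched as follows; treating "resting position" as any sample with |angle| below the threshold is our simplifying assumption:

```python
import numpy as np

FS_BEHAVIOR = 300  # behavioral camera rate (Hz)

def count_tail_beats(deflection, fs=FS_BEHAVIOR, thr=20.0, merge_s=0.5):
    """Count tail beats in a tail-deflection trace (degrees).

    A beat is a run of |deflection| > thr; consecutive runs separated by
    less than merge_s seconds of rest are merged into a single movement.
    """
    above = np.abs(np.asarray(deflection, dtype=float)) > thr
    # Indices where the trace crosses the threshold (rise/fall transitions)
    edges = np.flatnonzero(np.diff(above.astype(int)))
    if above[0]:
        edges = np.r_[0, edges]
    if above[-1]:
        edges = np.r_[edges, above.size - 1]
    starts, ends = edges[0::2], edges[1::2]
    if starts.size == 0:
        return 0
    gap = int(merge_s * fs)
    beats = 1
    for s, prev_e in zip(starts[1:], ends[:-1]):
        if s - prev_e >= gap:   # returned to rest for >= merge_s -> new movement
            beats += 1
    return beats
```

The imaging ON/OFF ratio described above then follows from counting beats in the two halves of each 200-s recording.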
Characterization of stimulation-induced calcium transients To characterize neuronal activation as a function of stimulation parameters (scan time and laser power), we first extracted the voxel time series averaged over the entire stimulation site (i.e., left habenula) from 4D ΔF/F 0 hyperstacks. Time traces were windowed to isolate and align the three stimulation events contained in a single trial. Isolated calcium transients were analyzed using the peak analyzer function in Origin Pro 2021 (OriginLab Corp.) to obtain peak amplitude, rise/decay time (i.e., time from baseline to peak and time from peak to baseline, respectively) and duration values. Pooled peak duration data were obtained by first averaging three events of the same larva (intra-individual) and then averaging data between larvae (inter-individual). Activation probability and correlation maps Using a custom Python tool, we calculated the probability of each voxel composing the brain to be active in response to the optogenetic stimulation. For each stimulation event, a voxel was considered active if its change in fluorescence in a 2 s time window after the stimulation exceeded three standard deviations above its baseline level (2 s pre-stimulation). Only events in which the voxels inside the stimulation volume met the activation criterion were considered effective optogenetic stimulations. By iterating this process for all the stimulation events performed (on the same site of the same larva), we calculated the activation probability of each voxel as the number of times the voxel exceeded the threshold divided by the total number of valid stimulations. Employing a second Python tool, we then computed activity correlation maps showing Pearson’s correlation coefficient between each voxel and the activity extracted from the stimulation site (seed). The 3D maps of correlation and activation probability obtained were subsequently aligned. First, the acquired 4D hyperstacks were time averaged. 
Second, the resulting 3D stack of each larva was registered to a reference brain. Nonrigid image registration was performed using the open source software Computational Morphometry Toolkit (CMTK 3.3.1, https://www.nitrc.org/projects/cmtk/ ) and the ImageJ user interface , employing the command string (-awr 01 -X 52 -C 8 -G 80 -R 3 -A “--accuracy 1.6” -W “--accuracy 0.4”). The calculated morphing transformations were ultimately applied to the corresponding 3D maps. Following the zebrafish brain atlases , , the volumetric regions of interest (ROIs) used in the analysis were manually drawn onto the reference brain (employing ImageJ), based on anatomical boundaries. The 10 volumetric ROIs were then adopted to extract from each map the voxel-wise distribution of activation probability/correlation coefficient values used for further analyses. The binarized functional connectivity map shown in Fig. was obtained after applying a threshold on Pearson’s correlation coefficient to the average correlation map shown in Fig. . The 0.12 value adopted represented the correlation coefficient threshold separating significant from non-significant correlations among brain regions (see Fig. ). Cross-wavelet power spectrum analysis The possible coupling between the delineated brain ROIs and the stimulation site was also characterized in the spectral domain by quantifying and inspecting their cross-wavelet power spectral density (CPSD) . The wavelet transforms of the average activity signals extracted from each ROI were computed using the Morlet mother wavelet, adopting a central frequency f 0 = 1 Hz as time-frequency resolution parameter, and 256 voices per octave for fine frequency discretization. Spurious time-boundary effects were addressed by first applying a zero-padding scheme to the original time series, and then isolating the so-called cone of influence, i.e., the time–frequency region where boundary distortions in the CPSD estimates are negligible . 
Granger causality analysis

The causal link between the activity of different brain regions was explored by analyzing their Granger causality (GC) . GC analysis among ΔF/F0 time series of brain regions was performed in R, with the “lmtest” library . To select an appropriate lag order, we computed both the Akaike (AIC) and Bayesian (BIC) information criteria of the complete autoregressive model for each comparison (each trial and each possible pair of regions) for lag orders from 1 to 8 (0.4–3.2 s). Then, for each comparison we selected the lag order associated with the minimum value of the information criteria. Finally, we computed the mode of this list and used this unique lag order for every comparison of the final GC analysis. The mode values based on both AIC and BIC were the same: a lag order equal to 2, which corresponds to a 0.8 s lag. For each larva, trial, pair of regions’ activity, and causality direction, we computed the average F statistic value of the tests. Finally, multiplicity correction of the p-values was performed with a false discovery rate approach using the Benjamini–Hochberg method (GC analysis results are reported in Supplementary Data ). The F statistic was presented in Fig. as average values of all pairs having at least two significant trials. The F statistic in the graph of Fig. was presented as arrows color-mapped according to the average F value found between brain regions’ connections. The direction of the arrow indicates the direction of the causality interaction, while arrow width represents the proportion of significant trials over the total. Only causal links having at least 33% of significant trials were depicted (see thresholded matrix in Supplementary Fig. ).
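The lag-order selection procedure (AIC minimization per comparison, then the mode across comparisons) can be sketched as below. This is a simplified ordinary-least-squares illustration in Python rather than the R/lmtest implementation used in the study, and all names are ours.

```python
import numpy as np
from collections import Counter

def aic_lag(y, x, p):
    """AIC of the full model y_t ~ [1, y_{t-1..t-p}, x_{t-1..t-p}] fitted by OLS."""
    n = len(y) - p
    X = np.column_stack(
        [np.ones(n)]
        + [y[p - k:len(y) - k] for k in range(1, p + 1)]   # lagged target
        + [x[p - k:len(x) - k] for k in range(1, p + 1)]   # lagged driver
    )
    Y = y[p:]
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    rss = float(np.sum((Y - X @ beta) ** 2))
    return n * np.log(rss / n) + 2 * X.shape[1]

def select_common_lag(pairs, max_lag=8):
    """Pick the AIC-minimizing lag (1..max_lag) for each (target, driver)
    comparison, then return the mode across all comparisons."""
    best = [min(range(1, max_lag + 1), key=lambda p: aic_lag(y, x, p))
            for y, x in pairs]
    return Counter(best).most_common(1)[0][0]
```

The mode-based choice gives a single lag order for every GC test, mirroring the unique 2-lag (0.8 s) value reported above.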
Partial correlation analysis

In order to gain insight into the directness of the interactions between brain regions, we analyzed the partial correlation between pairs of region-wise mean ΔF/F0 time series, aiming to capture their residual coupling after the influence of all other regions was accounted for . Pairwise partial correlation coefficients were obtained as described by Han and colleagues . In detail, the partial correlation between a pair of brain regions (i.e., LHb-IPN, LHb-RHb, and T-RHb), A and B, was evaluated as the Pearson’s correlation coefficient between regressed time series ΔF/F0 A,R and ΔF/F0 B,R , suitably corrected for the contribution of each other regions’ mean activity signal. These time series were estimated by multiple regression on the original traces ΔF/F0 A and ΔF/F0 B , through the evaluation of the Moore-Penrose pseudoinverse of the remaining regions’ time series matrix, C:

$$\beta_{A}=C^{+}\cdot \Delta F/F_{0\,A}$$

$$\beta_{B}=C^{+}\cdot \Delta F/F_{0\,B}$$

where C⁺ is the Moore-Penrose pseudoinverse matrix:

$$C^{+}={(C^{T}C)}^{-1}C^{T}$$

here computed using the Python SciPy library . The regressed time series were then obtained as:

$$\Delta F/F_{0\,A,R}=\Delta F/F_{0\,A}-C\cdot \beta_{A}$$

$$\Delta F/F_{0\,B,R}=\Delta F/F_{0\,B}-C\cdot \beta_{B}$$

The directness of the mutual interaction between two brain regions was finally detected from the presence of both statistically significant Pearson’s and partial correlation coefficients. When only the Pearson’s correlation is significant, the interaction is defined as indirect.
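A compact numpy version of this regression step (Moore-Penrose pseudoinverse, then Pearson correlation of the residual traces) might look like the sketch below; the added intercept column and the synthetic example are our own choices, not part of the published pipeline.

```python
import numpy as np

def partial_correlation(a, b, confounds):
    """Pearson correlation between traces a and b after regressing out the
    remaining regions' time series via the Moore-Penrose pseudoinverse.

    a, b: (n_timepoints,); confounds: (n_timepoints, n_regions)
    """
    # Intercept column added for centering (our assumption).
    C = np.column_stack([np.ones(len(a)), confounds])
    C_pinv = np.linalg.pinv(C)            # Moore-Penrose pseudoinverse
    a_r = a - C @ (C_pinv @ a)            # regressed (residual) time series
    b_r = b - C @ (C_pinv @ b)
    return np.corrcoef(a_r, b_r)[0, 1]
```

With a strong common driver, the raw Pearson correlation is high while the partial correlation collapses toward zero, which is exactly the indirect-interaction signature described above.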
When only the partial correlation is significant, the observed relationship is defined as a pseudo-correlation . Results of partial correlation analysis can be found in Supplementary Data .

Whole-brain calcium imaging data were processed as follows. Images composing the hyperstacks were first 2 × 2 binned (method: average) in the x and y dimensions to obtain a quasi-isotropic voxel size (4.4 × 4.4 × 5 μm³). Then, employing a custom tool written in Python 3, we computed the voxel-wise ΔF/F0 of each volumetric recording, after background subtraction. F0 was calculated using FastChrom’s baseline estimation method . To quantify crosstalk during imaging, we first considered different metrics to evaluate neuronal activity levels (Supplementary Fig. ). We computed the standard deviation (SD) over time, the number of calcium peaks per minute, and the average peak amplitude of each voxel composing the larval brain during 5 min of whole-brain calcium imaging (Supplementary Fig. ). For automatic calcium peak identification, we set the following thresholds: minimum peak prominence 0.05; minimum peak FWHM 2.5 s; minimum peak distance 5 s. We found the SD to have improved sensitivity in discriminating between diverse conditions compared to the number of peaks per minute (Supplementary Fig. ). These results reflected those observed by adopting the average amplitude of calcium peaks (Supplementary Fig. ) as an activity metric. We thus employed SD over time as a proxy of neuronal activity levels since its results do not depend on predefined thresholds. Therefore, the distribution of SD values calculated for each brain was first normalized with respect to the total number of voxels and then pooled (method: average) according to the larval strain (ReaChR + and ReaChR − ).
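The automatic peak-identification thresholds listed above map directly onto scipy.signal.find_peaks. The sketch below assumes the 2.5 Hz volume rate of the recordings and uses find_peaks' default half-prominence width as an FWHM proxy; these are our assumptions, not the authors' stated implementation.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 2.5  # assumed imaging volume rate (Hz)

def calcium_peaks(trace, fs=FS):
    """Automatic calcium peak detection with the thresholds given in the
    text: prominence >= 0.05, FWHM >= 2.5 s, inter-peak distance >= 5 s."""
    peaks, props = find_peaks(
        trace,
        prominence=0.05,
        width=2.5 * fs,    # minimum FWHM, in samples (rel_height=0.5 default)
        distance=5 * fs,   # minimum peak separation, in samples
    )
    return peaks, props
```

The SD-over-time metric favored in the text is then simply `trace.std()` per voxel, with no threshold dependence.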
Similarly, the normalized distributions of SD values for ReaChR + and ReaChR − larvae subjected to 100 s of whole-brain imaging during which they received 5 photostimulations (1064 nm) were calculated to evaluate the effect of the optogenetic stimulation. Imaging crosstalk and optogenetic stimulation indices were calculated using the Hellinger distance as a measure of dissimilarity between two probability distributions P and Q:

$$H(P,Q)=\sqrt{1-\sum_{i=1}^{n}\sqrt{P_{i}Q_{i}}}$$

The errors in the Hellinger distances were calculated according to error propagation theory as follows:

$$\Delta H=\sqrt{\sum_{i=1}^{n}\left[\frac{Q_{i}}{16H^{2}P_{i}}\cdot \Delta P_{i}^{2}+\frac{P_{i}}{16H^{2}Q_{i}}\cdot \Delta Q_{i}^{2}\right]}$$

Finally, normalized distributions of SD values for ReaChR − larvae exposed either to imaging (100 s) only or to imaging and photostimulation (100 s and 5 stimuli at 1064 nm) were calculated to evaluate the specificity of the effect observed.

Tail deflection (i.e., the sum of relative tail segment angles) time traces were processed to detect and count the number of tail beats. In detail, deflection peaks were considered as tail beats if exceeding an absolute threshold of 20°. Consecutive tail deflections that did not come back to resting position for at least 0.5 s were considered part of the same movement. The relative number of tail beats during imaging ON (Fig. ) was calculated for each trial of each larva by dividing the number of tail movements during the imaging ON period by that quantified during the imaging OFF period. To combine behavioral and brain activity recordings (Fig. ), the average fluorescence time trace of the hindbrain acquired at 2.5 Hz was first interpolated to match the frequency of behavioral recordings (300 Hz). Then, ΔF/F0 was calculated as previously described.
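The Hellinger-distance comparison between two normalized SD distributions reduces to a few lines of numpy. The standard discrete Hellinger distance is implemented below; the defensive clipping of floating-point rounding error is our own addition.

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions:
    H(P,Q) = sqrt(1 - sum_i sqrt(P_i * Q_i)).
    Returns 0 for identical distributions and 1 for disjoint ones."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()                      # normalize to probability distributions
    q = q / q.sum()
    bc = np.sum(np.sqrt(p * q))          # Bhattacharyya coefficient
    return np.sqrt(max(0.0, 1.0 - bc))   # clip tiny negative rounding error
```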
To guarantee reproducibility of the findings and avoid bias, the larvae employed in the experiments never belonged to a single batch of eggs. No a priori sample size calculation was performed.
The sample size employed was justified by the high degree of consistency in the results obtained from different larvae. The expression patterns of GCaMP6s and ReaChR were evaluated in N = 1 ReaChR + larva by confocal imaging. Crosstalk activation of ReaChR by 920 nm excitation light-sheet imaging was evaluated on N = 3 ReaChR + and N = 3 ReaChR − larvae in the brain activity experiment, and N = 4 ReaChR + and N = 4 ReaChR − larvae in the combined brain/behavioral activities experiment. The effect of optogenetic stimulation was evaluated on N = 6 ReaChR + and N = 6 ReaChR − larvae. Characterization of optogenetically induced calcium transients as a function of stimulation settings was performed on N = 4 ReaChR + larvae ( n = 3 calcium transients per larva). The activation probability, correlation, and causality were evaluated on N = 6 ReaChR + larvae ( n = 30 stimulations per larva). OriginPro 2021 (OriginLab Corp.) was used to carry out all the statistical analyses. Unless otherwise stated, results were considered statistically significant if their corresponding p -value was less than 0.05 (* P < 0.05; ** P < 0.01; *** P < 0.0001). Both intergroup and intragroup statistical significance of imaging crosstalk (Fig. and Supplementary Fig. ) was assessed using two-way ANOVA (factors: zebrafish strain, imaging power) followed by post-hoc comparisons with Tukey’s method. Two-way ANOVA and Tukey’s post-hoc comparison were also employed for quantifying the statistical significance of tail beats between imaging OFF and ON conditions (Fig. ; factors: zebrafish strain, imaging presence). For intergroup statistical evaluations of both activation probability (Fig. ) and Pearson’s correlation coefficient (Fig. ), we first verified the normal distribution of the data using the Shapiro-Wilk test (see Supplementary Fig. for test results) and then performed one-way ANOVA (factor: brain region), followed by post-hoc comparisons employing Tukey’s method.
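The normality-check-then-ANOVA sequence can be sketched with scipy. The study used OriginPro; this Python equivalent, including the function name, is our illustration only, and it omits the Tukey post-hoc comparisons.

```python
from scipy import stats

def region_anova(groups):
    """Shapiro-Wilk normality check per group, then a one-way ANOVA across
    groups (factor: brain region). Post-hoc Tukey comparisons are left to a
    dedicated stats package; only the omnibus test is returned here."""
    shapiro_p = [stats.shapiro(g).pvalue for g in groups]  # one p-value per group
    f_stat, p = stats.f_oneway(*groups)                    # omnibus ANOVA
    return shapiro_p, f_stat, p
```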
Statistical comparisons of the relative number of tail beats during 920 nm imaging (Fig. ) and of median SD values to evaluate the effect of optogenetic stimulation (Fig. and Supplementary Fig. ) were performed using an unpaired t-test. Statistical comparisons of the average distributions of SD (Fig. ) and Pearson’s correlation coefficient (Fig. ) values were performed with the two-sample Kolmogorov-Smirnov test (KS test), applying the Bonferroni correction ( α = 0.05/3 = 0.01667, in both cases). Further information on research design is available in the Reporting Summary linked to this article.
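The two-sample KS comparison with the Bonferroni-corrected threshold can be expressed as a short scipy sketch (the study performed these tests in OriginPro; names here are ours).

```python
from scipy import stats

ALPHA = 0.05 / 3  # Bonferroni-corrected threshold from the text (≈0.01667)

def compare_distributions(sample_a, sample_b, alpha=ALPHA):
    """Two-sample Kolmogorov-Smirnov test with the Bonferroni-corrected
    alpha; returns the KS statistic, the p-value, and significance."""
    res = stats.ks_2samp(sample_a, sample_b)
    return res.statistic, res.pvalue, res.pvalue < alpha
```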
Open Label Vancomycin in Primary Sclerosing Cholangitis-Inflammatory Bowel Disease: Improved Colonic Disease Activity and Associations With Changes in Host–Microbiome–Metabolomic Signatures

Between 2% and 14% of patients with inflammatory bowel disease (IBD) develop primary sclerosing cholangitis (PSC). In turn, most individuals with PSC develop colonic IBD. Although rare, the development of PSC is a critical juncture for IBD patients, being associated with heightened risks of colectomy and colorectal cancer (necessitating annual colonoscopic surveillance ), hepatobiliary malignancy, need for liver transplantation, and all-cause mortality. Liver transplantation is the only life-extending intervention for patients; however, recurrent disease develops among 1 in 3 patients, and post-transplant IBD can relapse/develop de novo in up to 40% of transplant recipients. In turn, IBD activity and the timing of colectomy have been shown to impact liver-related outcomes, prior to and following liver transplantation. The etiology of PSC is unknown, but the association with IBD has generated several pathogenic hypotheses in which immune dysregulation, gut microbial changes, and alterations in bile acid (BA) homeostasis are proposed to contribute. The mechanisms of classical IBD and PSC-IBD share some similarities. For instance, data from our group and others highlight the role of heightened IL-17 (interleukin-17) secretor cell responses to pathogen stimulation. However, several differences are evident epidemiologically, pathologically, and phenotypically. Moreover, genome-wide association studies suggest that the genetic architecture differs, with correlation modeling estimating an ulcerative colitis (UC) comorbidity rate of only 1.6% in patients with PSC, compared with the 70% rate seen in clinical practice. This degree of ‘missing heritability’ indicates a role for other contributory factors.
We have previously shown that the composition and function of mucosa-adherent gut microbiota are also distinct in PSC patients compared with individuals with UC alone and healthy controls ; an observation that has been robustly and externally validated. Furthermore, analysis of the colonic mucosal transcriptome shows clear disturbances in BA homeostasis, supporting a causative role of gut microbial alterations in disease pathogenesis. Pilot studies using the orally administered, non-absorbable antibiotic vancomycin also demonstrate a reduction in serum alkaline phosphatase (ALP) values (the biochemical hallmark of PSC). Moreover, observational cohort series have shown that oral vancomycin (OV) was able to induce and maintain remission in IBD activity among children with PSC. , In this interventional study, we used an intensive multi-omic approach to uncover the mechanisms by which OV attenuates colonic mucosal inflammation in patients with PSC and active IBD. In so doing, we explore biological changes in host pathophysiology, alongside shifts in gut microbiota, in an attempt to further understanding of this complex disease.

2.1. Participants

We conducted a single-arm interventional study of open-label OV treatment in patients with PSC and active colonic IBD. The overarching goal was to quantify the proportion of patients who attained clinical remission from an IBD perspective, and to identify the changes in host mucosal biology associated with OV treatment. The study was conducted from February 2022 to November 2022 (NCT05376228). Patients aged ≥18 years with a diagnosis of PSC and concomitant pancolonic IBD were screened for eligibility at a single high-volume center. Inclusion criteria were mild to moderately active colitis based on a partial Mayo colitis score of ≥3 and ≤6, and commitment to participate in scheduled, standard-of-care lower gastrointestinal endoscopy (as part of disease assessment and annual colorectal cancer surveillance).
Exclusion criteria for study participation were: any active infectious cause of diarrhea (including Clostridioides difficile toxin-positive stool); an isolated ileal, right-sided or rectal phenotype of PSC-IBD; use of antibiotics or probiotics in the 3 months prior to screening; presence of stricturing, fistulating, or perianal IBD; a history of small bowel or colonic resection; a change or initiation of corticosteroids in the 2 weeks prior to screening; commencement of an immunomodulator or advanced therapy regimen in the prior 3 months; a history of intolerance to vancomycin; and/or evidence of hepatic decompensation in the 3 months prior to screening. Liver transplant recipients with evidence of recurrent PSC were allowed to participate in the absence of other exclusion criteria, provided inclusion criteria were met. The sample size for this trial was determined empirically, based on available research grant funding. As the study was exploratory in nature, it was not powered to detect specific efficacy endpoints, but rather to evaluate mechanistic effects of OV in PSC-IBD, and the association with specific clinical outcomes.

2.2. Study design

Screening for the study and written consent were obtained from eligible patients up to 2 weeks prior to their scheduled annual surveillance colonoscopy (Week −2). At baseline (Week 0), the participant’s annual surveillance colonoscopy was performed, the total Mayo score was recorded, and up to 8 biopsies from the sigmoid colon were collected for fulfilling translational research objectives. Individuals with evidence of active colitis (endoscopic Mayo score of 1 or 2 and a partial Mayo score of ≥3 and ≤6) were treated with 4 weeks of OV 125 mg 4 times a day (QID), followed by 4 weeks of treatment withdrawal (Week 8). Following completion of the treatment course (Week 4), a sigmoidoscopy was performed for a reassessment of endoscopic disease activity, and for collection of post-treatment sigmoid colon biopsies.
Stool samples were collected at the baseline visit (prior to the patient taking bowel preparation for colonoscopy) and at Weeks 2, 4, and 8 to study metagenomics and metatranscriptomics (collected in DNAGenotek OMR-200 stool collection kits), short-chain fatty acids (SCFAs) and fecal BA profiles (collected in ME-200 collection kits), in addition to analysis of fecal calprotectin values. Serum was collected at the same timepoints for measurement of liver biochemistry (bilirubin, albumin, ALP, and alanine transaminase [ALT]), alongside clinical characteristics and the partial Mayo colitis score. Patients who experienced a significant increase in their partial Mayo score (defined as a >30% increase from baseline) during the study period, or based on clinician discretion, would exit the study and receive escalation in IBD treatment as per routine standard of care. An unselected, random selection of participants ( n = 6) also provided stool samples for microbiological culture, to determine whether OV treatment led to the selection of vancomycin-resistant enterococci (Slanetz and Bartley agar [Oxoid]; treated with 8 µg/ml vancomycin).

2.3. Clinical outcome analysis

The primary efficacy outcome was the induction of clinical remission at Week 4, the end of OV treatment, as defined in contemporary clinical trial practice. Specifically, this was defined by a modified Mayo colitis score of <2, with a stool frequency subscore ≤1, rectal bleeding score (RBS) = 0, and an endoscopic score ≤1. Given the short duration of follow-up, the term ‘short-term’ clinical remission is used herein to reflect the improvements observed on treatment at 4 weeks, while acknowledging that longer-term follow-up is required to assess more sustained or durable remission.
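The primary-outcome definition translates into a simple predicate. The helper below (name and signature are ours, purely illustrative) encodes the stated thresholds.

```python
def in_clinical_remission(modified_mayo, stool_frequency, rectal_bleeding, endoscopic):
    """Illustrative check of the primary-outcome definition: modified Mayo
    score < 2, stool frequency subscore <= 1, RBS = 0, endoscopic score <= 1."""
    return (modified_mayo < 2
            and stool_frequency <= 1
            and rectal_bleeding == 0
            and endoscopic <= 1)
```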
Key secondary efficacy outcomes were assessed at Weeks 4 and 8, and included changes in fecal calprotectin, the partial Mayo colitis score, the total Mayo colitis score, and serum ALP, ALT, and bilirubin values.

2.3.1. Data presentation and statistical analysis

Continuous variables are shown as mean values and SD unless otherwise specified. Categorical variables are presented as raw numbers and percentages. The paired-sample Wilcoxon test and analysis of variance (ANOVA) were conducted between different study timepoints to assess changes in fecal calprotectin, Mayo colitis scores and subscores, and liver biochemistry following the treatment and subsequent withdrawal phases of OV treatment. Differences between groups were considered significant at a value of p < 0.05.

2.4. Stool metagenomic and metatranscriptomic analysis

DNA and RNA were extracted from stool preserved in Omnigene tubes using Spin Column technology via a modified ZymoBIOMICS DNA/RNA Mini Kit protocol. The integrity of the isolated RNA was determined using the Bioanalyzer (Agilent) and only samples with an RNA integrity number >7 were used. Libraries from extracted DNA for metagenomics were prepared using the NEBNext ® Ultra™ II FS DNA Library Prep Kit following the protocol recommended by the manufacturer. Single-index, paired-end shotgun metagenomic sequencing was performed on the NextSeq2000 platform. For metatranscriptomics, the extracted total RNA was first ribo-depleted using the Illumina Ribo-Zero Plus Microbiome rRNA Depletion Kit prior to library preparation with the Illumina Stranded Total RNA prep following the protocol recommended by the manufacturer. Paired-end, dual-indexed shotgun RNA sequencing was performed on the NextSeq2000 platform. The library preparations and sequencing were done as a single batch with appropriate negative controls to reduce potential confounders.
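The paired Wilcoxon comparison between study timepoints can be sketched with scipy; the numbers in the example are synthetic, not study data.

```python
import numpy as np
from scipy import stats

def paired_change(baseline, followup):
    """Paired-sample Wilcoxon signed-rank test between two study timepoints
    (e.g., fecal calprotectin at Week 0 vs Week 4)."""
    res = stats.wilcoxon(baseline, followup)
    return res.statistic, res.pvalue
```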
From the 60 stool samples collected at specified timepoints of the study, a total of 1.4 billion reads from shotgun metagenomic sequencing and 950 million reads from shotgun metatranscriptomic sequencing were generated. Both metagenomic and metatranscriptomic sequence reads were processed with the KneadData v0.5.1 quality control pipeline ( http://huttenhower.sph.harvard.edu/kneaddata ), which uses Trimmomatic to remove adapter sequences and low-quality bases, and Bowtie2 for decontamination to remove host DNA/RNA and ribosomal RNA. In addition, metatranscriptomic reads were filtered against the human transcriptome and the SILVA database. Following filtering and quality control (QC), an average of 18 million paired-end metagenomic reads per sample were processed for assembly and reference-based profiling, and an average of 12 million paired-end metatranscriptomic reads per sample were processed for downstream analysis. Bacterial taxonomic annotations from the metagenomic sequences were performed using MetaPhlAn4, which uses clade-specific markers that provide bacterial, archaeal, viral, and eukaryotic quantification at the species level, using default parameters for reported relative abundances. Species with relative abundance <0.1% in at least 5 samples were excluded from the study. Functional profiling was derived from metatranscriptomes (active) and metagenomes (potential) using HUMAnN 3.0. For metatranscriptomic mapping, the taxonomic profile of the corresponding metagenome was used as an additional input to HUMAnN 3.0 via the --taxonomic-profile flag. This guaranteed that RNA reads were mapped to any species’ pangenomes detected in the metagenome rather than using species pangenomes derived from the RNA reads. Briefly, for both metagenomes and metatranscriptomes, the translated search using DIAMOND was mapped against UniRef90, following which hits were counted per gene family and normalized for length and alignment quality.
Gene families were regrouped into Kyoto Encyclopaedia of Genes and Genomes (KEGG) orthologs, enzyme classifications, and gene ontologies. Gene family abundances from both the nucleotide and the translated searches were then combined into structured pathways from MetaCyc and sum-normalized to relative abundances. All feature abundances that did not exceed 0.1% of the data with a minimum prevalence of 10% of the total samples were excluded from further analysis. Additionally, the ratio of transcript expression against gene abundance was measured to quantify changes in expression while controlling for gene copy number. Alpha diversity analysis was performed using the Shannon diversity index, and Bray–Curtis dissimilarities between groups were calculated using the Adonis function from the vegan R package in PERMANOVA analyses. Principal Coordinates Analysis (PCoA), non-metric multidimensional scaling (NMDS) dimensionality reduction (species only), and sample cluster analysis were performed. Lastly, Microbiome Multivariable Associations with Linear Models (MaAsLin2) was used to quantify paired differences in the relative abundance of taxa, genes, and pathways between study timepoints and fecal calprotectin levels. Due to the hypothesis-generating and exploratory nature of this study, an FDR (false discovery rate)-corrected p-value threshold of ≤0.1 was used. The Spearman rank correlation coefficient (rho) was used to assess correlations between taxa, genes, fecal BA concentrations, and fecal calprotectin levels.

2.5. Mucosally adherent microbial 16S rRNA gene profiling (metataxonomics)

DNA and RNA were extracted from colonic mucosal biopsies taken at baseline and Week 4 of the study using a modified Qiagen AllPrep DNA/RNA Mini Kit protocol that included mechanical lysis and on-column DNAse digestion. The RNA extracted was passed on for host mucosal transcriptomics as described later.
The DNA extracted was subjected to 16S rRNA gene amplification and sequencing using the Earth Microbiome Project protocol. Briefly, 16S rRNA genes were amplified in technical duplicates with primers targeting the 16S rRNA V4 region (515F–806R) using a 1-step, single-indexed PCR approach. As with DNA/RNA extraction, 16S rRNA gene PCR was done in a batch with appropriate negative controls. Paired-end sequencing (2 × 250 bp) was performed on the Illumina MiSeq platform (Illumina, San Diego, USA). A total of 7.2 million reads (240 324 reads/sample) and 1450 ASVs were obtained after QC using the Quantitative Insights Into Microbial Ecology 2 (QIIME2) pipeline. Taxonomy was assigned against the Silva-132-99% OTUs database. The functional profiles of microbial communities were inferred using PICRUSt2-derived relative MetaCyc and KEGG pathway analysis. Differences in taxa and inferred functional profiles were assessed using MaAsLin2 as described previously.

2.6. Host mucosal transcriptomic analysis

The RNA extracted from colonic mucosal biopsies using the protocol described earlier was subjected to the Ribo-Zero Gold rRNA Removal Kit (Illumina, San Diego, USA) to remove contaminating ribosomal RNA, and the SMARTer Stranded RNA-Seq kit (Takara, Japan) was used for library construction. Paired-end 75 bp sequencing was performed using the NextSeq 500/550 v2 kit (Illumina, San Diego, USA). After the removal of low-quality and ambiguous reads, an average of 15 million reads per sample were processed for downstream analysis. Reads obtained were quality controlled with FastQC and Trimmomatic. , Contaminating ribosomal RNA reads were removed using Bowtie2, and reads were mapped to the human genome sequence database (GRCh38) using STAR and quantified with featureCounts. , , Genes were filtered and differential gene expression was analyzed (FDR-corrected p ≤ 0.05) using edgeR. Gene ontology and KEGG/Reactome pathway analysis was conducted using Camera for competitive gene set testing.
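For intuition, the Shannon index and Bray–Curtis dissimilarity used in the diversity analyses reduce to a few lines of numpy. The study performed these calculations with the vegan R package; the re-implementation below is purely illustrative.

```python
import numpy as np

def shannon_index(counts):
    """Shannon diversity (natural log) from a vector of taxon counts."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()              # relative abundances, zeros dropped
    return -np.sum(p * np.log(p))

def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two abundance vectors (0 = identical,
    1 = no shared taxa)."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    return np.abs(u - v).sum() / (u + v).sum()
```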
ClueGo was used to functionally group gene ontology and pathway annotation networks.

2.7. Fecal metabolomics

2.7.1. Bile acid profiling

Reversed-phase (RP) ultra-high performance liquid chromatography–mass spectrometry (UHPLC–MS) was performed on the OmniMet stool-preservative mixture for BA profiling. The OmniMet Gut ME-200 tubes were shaken for 1 minute and centrifuged for 20 minutes at 3214 g, 4°C. The supernatant (500 µL) was transferred to a micro-centrifuge filter (0.22 µm, nylon, Costar) and centrifuged for 20 minutes at 16 000 g, 4°C. The samples were then prepared as previously described. UHPLC–MS analyses were performed using an ACQUITY UPLC (Waters Corp., Milford, MA, USA) coupled to a Xevo G2-S TOF mass spectrometer (Waters Corp., Manchester, UK) via a Z-spray electrospray ionization (ESI) source. Chromatographic separation was conducted on an ACQUITY BEH C8 column (1.7 µm, 100 mm × 2.1 mm), thermostated at 60°C. The mobile phases consisted of ACN:H2O 1:10 (v/v) with 1 mM ammonium acetate, adjusted to pH 4.15 with acetic acid (A), and ACN:IPA (acetonitrile:isopropyl alcohol) 1:1 (v/v) (B), at a flow rate of 0.6 mL/min. The chromatographic gradient elution program was conducted as previously described. Two-µL injections of prepared samples were made into the system. Mass spectrometry was performed in negative ESI mode using the following parameters: capillary voltage 1.5 kV, cone voltage 60 V, source temperature 150°C, desolvation temperature 600°C, desolvation gas flow 1000 L/h, and cone gas flow 150 L/h. To ensure mass accuracy, a lock-spray interface was used, with leucine enkephalin (m/z 554.2615, [M−H]−) solution used as the lock mass at a concentration of 200 ng/µL and a flow rate of 15 µL/min. Quality control samples were prepared by pooling equal volumes of the fecal filtrates.
Quality control samples were used as an assay performance monitor and as a proxy to remove features with high variation. Quality control samples were also spiked with mixtures of BA standards (81 BA standards: 44 non-conjugated, 12 taurine-conjugated, 9 glycine-conjugated, and 16 sulfated; Steraloids, Newport, RI, USA; Sigma-Aldrich, Gillingham, UK; and QMX Laboratories, Thaxted, UK) and were analyzed along with the patient samples to determine the chromatographic retention times of BAs and to aid in metabolite identification. Raw data were converted to the open-source mzML format, and signals below 100 counts (absolute intensity threshold) were removed using the MSConvert tool in ProteoWizard. Feature extraction was performed with XCMS in R, and data filtering and correction to eliminate potential run-order effects were performed using the nPYc-Toolbox. Targeted extraction and integration of annotated BA species were performed using the R package peakPantheR. Probabilistic quotient normalization was used to correct for dilution effects. Features below the limit of detection in more than 20% of study samples were discarded. Values were log-transformed, and zeros were imputed using impute.QRILC from the imputeLCMD R package. For statistical analyses, features were also mean-centered.

2.7.2. Fecal SCFA/short-chain carboxylic acid profiling

A 300-µL aliquot of fecal extract collected in OMNIMET® tubes was centrifuged for 10 minutes and filtered through 0.45-µm nylon microcentrifuge tubes (Costar, Corning, New York, USA), then stored at −80°C. From this extract, a 5-µL aliquot was transferred into a 1.5-mL microcentrifuge tube (Eppendorf, Germany) and mixed with 45 µL of water.
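Probabilistic quotient normalization, used above to correct for dilution effects, divides each sample by the median of its feature-wise quotients against a reference spectrum. A minimal pure-Python sketch, assuming the feature-wise median across samples as the reference (the study used established metabolomics tooling, not this code):

```python
from statistics import median

def pqn_normalize(samples):
    """Probabilistic quotient normalization.

    `samples`: list of equal-length intensity vectors. The reference
    spectrum is the feature-wise median across samples; each sample is
    divided by the median of its quotients against that reference
    (its estimated dilution factor).
    """
    n_features = len(samples[0])
    reference = [median(s[i] for s in samples) for i in range(n_features)]
    normalized = []
    for s in samples:
        quotients = [v / r for v, r in zip(s, reference) if r > 0]
        dilution = median(quotients)
        normalized.append([v / dilution for v in s])
    return normalized

# A sample that is a uniform 2x concentration of another collapses
# back onto the same normalized profile.
a = [10.0, 20.0, 30.0]
b = [20.0, 40.0, 60.0]   # same profile, twice as concentrated
norm_a, norm_b = pqn_normalize([a, b])
```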
All other mass spectrometry and chromatographic parameters remained consistent with the original method. Additional calibrators were added to extend the original method up to 500 µmol/L for all 10 compounds.

2.7.3. Statistical analysis of metabolomic data

Statistical analyses were adapted from approaches described previously. Multivariate analysis of UHPLC–MS BA profiling data was performed on log-transformed and Pareto-scaled data. Orthogonal partial least squares-discriminant analysis (OPLS-DA) models were validated using CV-ANOVA, which provides a significance test of the null hypothesis of equal residuals between the model under validation and a randomly fitted model that uses the same data. S-plots were used to visualize highly influential discriminatory features; they depict the covariance and the correlation structure between the X-variables and the predictive score t[1] of the model. Features at the far ends of the plot have very high reliability while also having a high model influence, owing to their high variance in the dataset. Univariate statistics were performed using GraphPad Prism v10.2.2; the Friedman test was used for BA data to compare pre- with post-vancomycin samples, while the Kruskal–Wallis test was used for SCFA data. In both cases, Dunn's test was applied to account for multiple comparisons, with 2-tailed statistical tests used throughout.

2.8. Ethics statement

Ethical approval was obtained for this study from the West Midlands—South Birmingham Research Ethics Committee and HRA and Health and Care Research Wales (HCRW) (21/WM/0197). The study was registered on Clinicaltrials.gov (NCT05376228).

We conducted a single-arm interventional study of open-label OV treatment in patients with PSC and active colonic IBD.
The overarching goal was to quantify the proportion of patients who attained clinical remission from an IBD perspective, and to identify the changes in host mucosal biology associated with OV treatment. The study was conducted from February 2022 to November 2022 (NCT05376228). Patients aged ≥18 years with a diagnosis of PSC and concomitant pancolonic IBD were screened for eligibility at a single high-volume center. Inclusion criteria were mild to moderately active colitis, based on a partial Mayo colitis score of ≥3 and ≤6, and commitment to participate in scheduled, standard-of-care lower gastrointestinal endoscopy (as part of disease assessment and annual colorectal cancer surveillance). Exclusion criteria were any active infectious cause of diarrhea (including Clostridioides difficile toxin-positive stool); an isolated ileal, right-sided, or rectal phenotype of PSC-IBD; use of antibiotics or probiotics in the 3 months prior to screening; presence of stricturing, fistulating, or perianal IBD; a history of small bowel or colonic resection; a change or initiation of corticosteroids in the 2 weeks prior to screening; commencement of an immunomodulator or advanced therapy regimen in the prior 3 months; a history of intolerance to vancomycin; and/or evidence of hepatic decompensation in the 3 months prior to screening. Liver transplant recipients with evidence of recurrent PSC were allowed to participate, provided the inclusion criteria were met and no other exclusion criteria applied. The sample size for this trial was determined empirically, based on available research grant funding. As the study was exploratory in nature, it was not powered to detect specific efficacy endpoints, but rather to evaluate mechanistic effects of OV in PSC-IBD and their association with specific clinical outcomes.
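The screening window above can be expressed as a simple check. This helper is purely illustrative and not part of the study's tooling; the exclusion criteria are not modeled:

```python
def eligible_at_screening(age: int, partial_mayo: int) -> bool:
    """Screening check sketched from the inclusion criteria above:
    adults (>=18 years) with mild to moderately active colitis,
    defined as a partial Mayo colitis score of >=3 and <=6.
    (Illustrative only; exclusion criteria are not modeled.)"""
    return age >= 18 and 3 <= partial_mayo <= 6

# A partial Mayo score of 5 falls inside the mild-to-moderate window;
# a score of 7 falls outside it.
assert eligible_at_screening(33, 5)
assert not eligible_at_screening(33, 7)
assert not eligible_at_screening(17, 5)
```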
Screening for the study and written consent were obtained from eligible patients up to 2 weeks prior to their scheduled annual surveillance colonoscopy (Week −2). At baseline (Week 0), the participant's annual surveillance colonoscopy was performed, the total Mayo score was recorded, and up to 8 biopsies from the sigmoid colon were collected to fulfil translational research objectives. Individuals with evidence of active colitis (endoscopic Mayo score of 1 or 2 and a partial Mayo score of ≥3 and ≤6) were treated with 4 weeks of OV 125 mg 4 times a day (QID), followed by 4 weeks of treatment withdrawal (Week 8). Following completion of the treatment course (Week 4), a sigmoidoscopy was performed to reassess endoscopic disease activity and to collect post-treatment sigmoid colon biopsies. Stool samples were collected at the baseline visit (prior to the patient taking bowel preparation for colonoscopy) and at Weeks 2, 4, and 8 for metagenomics and metatranscriptomics (collected in DNAGenotek OMR-200 stool collection kits), short-chain fatty acid (SCFA) and fecal BA profiling (collected in ME-200 collection kits), and analysis of fecal calprotectin values. Serum was collected at the same timepoints for measurement of liver biochemistry (bilirubin, albumin, ALP, and alanine transaminase [ALT]), alongside clinical characteristics and the partial Mayo colitis score. Patients who experienced a significant increase in their partial Mayo score (defined as a >30% increase from baseline) during the study period, or at clinician discretion, would exit the study and receive escalation in IBD treatment as per routine standard of care. A random subset of participants (n = 6) also provided stool samples for microbiological culture, to determine whether OV treatment led to the selection of vancomycin-resistant enterococci (Slanetz and Bartley agar [Oxoid], treated with 8 µg/mL vancomycin).
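The study-exit rule above (a >30% increase in the partial Mayo score from baseline) is likewise a one-line criterion; a hedged sketch, with clinician-discretion exits not modeled:

```python
def exits_study(baseline_partial_mayo: float,
                current_partial_mayo: float) -> bool:
    """Study-exit rule sketched from the protocol above: a significant
    flare, defined as a >30% increase in the partial Mayo score from
    baseline. (Illustrative only; clinician discretion not modeled.)"""
    return current_partial_mayo > 1.3 * baseline_partial_mayo

assert exits_study(4, 6)        # a 50% increase triggers exit
assert not exits_study(4, 5)    # a 25% increase does not
```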
The primary efficacy outcome was the induction of clinical remission at Week 4, the end of OV treatment, as defined in contemporary clinical trial practice: a modified Mayo colitis score of <2, with a stool frequency subscore ≤1, a rectal bleeding subscore (RBS) of 0, and an endoscopic subscore ≤1. Given the short duration of follow-up, the term 'short-term' clinical remission is used herein to reflect on-treatment improvements observed at 4 weeks, while acknowledging that longer-term follow-up is required to assess more sustained or durable remission. Key secondary efficacy outcomes were assessed at Weeks 4 and 8 and included changes in fecal calprotectin, the partial Mayo colitis score, the total Mayo colitis score, and serum ALP, ALT, and bilirubin values.

2.3.1. Data presentation and statistical analysis

Continuous variables are shown as mean values and SD unless otherwise specified. Categorical variables are presented as raw numbers and percentages. The paired-sample Wilcoxon test and analysis of variance (ANOVA) were conducted between different study timepoints to assess changes in fecal calprotectin, Mayo colitis scores and subscores, and liver biochemistry following the treatment and subsequent withdrawal phases of OV treatment. Differences between groups were considered significant at p < 0.05.
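The primary-outcome definition above reduces to a simple rule over the three Mayo subscores. A minimal illustrative check; summing the three subscores (each 0-3) into the modified Mayo score follows the usual convention and is an assumption here, not code from the study:

```python
def in_clinical_remission(stool_freq: int, rectal_bleeding: int,
                          endoscopy: int) -> bool:
    """Primary-outcome check sketched from the definition above:
    modified Mayo score < 2 with a stool frequency subscore <= 1,
    rectal bleeding subscore = 0, and endoscopic subscore <= 1."""
    modified_mayo = stool_freq + rectal_bleeding + endoscopy
    return (modified_mayo < 2
            and stool_freq <= 1
            and rectal_bleeding == 0
            and endoscopy <= 1)

assert in_clinical_remission(1, 0, 0)
assert not in_clinical_remission(1, 0, 1)   # total of 2 fails the < 2 cut-off
assert not in_clinical_remission(0, 1, 0)   # any rectal bleeding fails
```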
DNA and RNA were extracted from stool preserved in Omnigene tubes using spin-column technology via a modified ZymoBIOMICS DNA/RNA Mini Kit protocol. The integrity of the isolated RNA was determined using the Bioanalyzer (Agilent), and only samples with an RNA integrity number >7 were used. Libraries from extracted DNA for metagenomics were prepared using the NEBNext® Ultra™ II FS DNA Library Prep Kit following the manufacturer's protocol. Single-indexed, paired-end shotgun metagenomic sequencing was performed on the NextSeq2000 platform. For metatranscriptomics, the extracted total RNA was first ribo-depleted using the Illumina Ribo-Zero Plus Microbiome rRNA Depletion Kit prior to library preparation with the Illumina Stranded Total RNA prep following the manufacturer's protocol. Paired-end, dual-indexed shotgun RNA sequencing was performed on the NextSeq2000 platform. Library preparation and sequencing were done as a single batch with appropriate negative controls to reduce potential confounders. From the 60 stool samples collected at specified timepoints of the study, a total of 1.4 billion reads from shotgun metagenomic sequencing and 950 million reads from shotgun metatranscriptomic sequencing were generated. Both metagenomic and metatranscriptomic sequence reads were processed with the KneadData v0.5.1 quality control pipeline (http://huttenhower.sph.harvard.edu/kneaddata), which uses Trimmomatic to remove adapter sequences and low-quality bases and Bowtie2 to remove contaminating host DNA/RNA and ribosomal RNA. In addition, metatranscriptomic reads were filtered against the human transcriptome and the SILVA database. Following filtering and quality control (QC), an average of 18 million paired-end metagenomic reads per sample was processed for assembly and reference-based profiling, and an average of 12 million paired-end metatranscriptomic reads per sample was processed for downstream analysis.
Bacterial taxonomic annotation of the metagenomic sequences was performed using MetaPhlAn4, which uses clade-specific markers to provide bacterial, archaeal, viral, and eukaryotic quantification at the species level, with default parameters for reported relative abundances. Species with a relative abundance <0.1% in at least 5 samples were excluded from the study. Functional profiles were derived from the metatranscriptomes (active) and metagenomes (potential) using HUMAnN 3.0. For metatranscriptomic mapping, the taxonomic profile of the corresponding metagenome was used as an additional input to HUMAnN 3.0 via the --taxonomic-profile flag. This ensured that RNA reads were mapped to the pangenomes of species detected in the metagenome, rather than to species pangenomes derived from the RNA reads themselves. Briefly, for both metagenomes and metatranscriptomes, reads were mapped against UniRef90 using the DIAMOND translated search, after which hits were counted per gene family and normalized for length and alignment quality.
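The alpha diversity (Shannon index) and beta diversity (Bray–Curtis dissimilarity) measures applied to these profiles reduce to short formulas. A minimal pure-Python sketch for illustration; the study itself used the vegan R package:

```python
from math import log

def shannon_index(abundances):
    """Shannon diversity H = -sum(p_i * ln p_i) over non-zero
    relative abundances p_i (normalized to sum to 1)."""
    total = sum(abundances)
    props = [a / total for a in abundances if a > 0]
    return -sum(p * log(p) for p in props)

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance vectors:
    1 - 2 * sum(min(x_i, y_i)) / (sum(x) + sum(y))."""
    shared = sum(min(a, b) for a, b in zip(x, y))
    return 1 - 2 * shared / (sum(x) + sum(y))

# Four equally abundant species give H = ln(4) ~= 1.386; communities
# with no shared taxa are maximally dissimilar.
print(round(shannon_index([25, 25, 25, 25]), 3))  # 1.386
print(bray_curtis([5, 0], [0, 7]))                # 1.0
```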
3.1. Demographics of enrolled participants

We recruited individuals with a known history of PSC-IBD who had pan-colitis, without prior evidence of small bowel disease, stricturing/penetrating disease, or the transmural involvement typically seen in Crohn's disease. This approach ensured that the study specifically targeted a well-defined subgroup with colonic disease, in keeping with the classical distribution of IBD seen in PSC. Eighteen patients were formally screened for study participation, of whom 3 were excluded, due to positive stool microscopy and culture (n = 2) and a new diagnosis of sigmoid cancer found on baseline colonoscopy (n = 1). In all, 15 patients participated in the study (median age 33 years; 11 assigned male sex at birth). All participants had an isolated colonic disease phenotype; 6 were naive to advanced IBD therapies, while 7 were on advanced IBD therapies at the time of recruitment. All patients had clinically and endoscopically active IBD at baseline, with a median fecal calprotectin of 459 µg/g (IQR 287, 1049) and a median total Mayo score of 5 (SD 1). Four participants had previously undergone liver transplantation (all with radiological evidence of recurrent PSC). Median ALP was 274 IU/L (IQR 157, 479) at baseline. Among non-transplant patients, the median baseline transient elastography value was 7.3 kPa (IQR 4.3-11.1 kPa).

3.2. Oral vancomycin treatment is associated with improvements in colitis activity

Clinical remission at 4 weeks was achieved by 12 patients (80%) following OV therapy. A significant decrease in fecal calprotectin (mean Δ −672 µg/g; CI 337-1008; p < 0.001), partial Mayo score (mean Δ −3; CI 2.2-3.8; p < 0.001), and endoscopic Mayo score (mean Δ −1.4; CI 1-1.8; p < 0.001) was observed at Week 4 in comparison to baseline, including on subgroup analysis of patients without a prior history of liver transplantation and after excluding those taking ursodeoxycholic acid.
Notably, all patients achieved mucosal healing at Week 4 (defined by a Mayo endoscopy subscore of 0 or 1). At Week 8 (following 4 weeks' withdrawal of OV), there was a significant increase in both fecal calprotectin (mean Δ 223 µg/g; CI 41.6-404; p < 0.05) and partial Mayo score (mean Δ 1.3; CI 0-2.5; p < 0.05) compared with Week 4, although both remained lower than baseline readings. No adverse events or tolerability concerns were observed or reported during the 4-week treatment period with OV. Stool culture in an unselected sample of participants (n = 6/15) did not show evidence of selection for vancomycin-resistant enterococci.

3.3. Oral vancomycin treatment is associated with changes in serum liver biochemistry

A reduction in serum ALP and ALT values was observed from baseline to Week 4 (mean Δ −105 IU/L; CI 25.4-185; p < 0.05 and mean Δ −19.3 IU/L; CI 2.3-36.3; p < 0.001, respectively). At Week 8, following withdrawal of OV, a nonsignificant trend toward an increase in serum ALT and ALP values was observed compared with Week 4. No change in total serum bilirubin was noted at any timepoint.

3.4. Changes in gut bacterial composition associated with OV treatment

A reduction in alpha diversity in stool was observed as early as Week 2 of OV treatment compared with baseline (Δ −1.2, p < 0.001). At the end of treatment (Week 4), the alpha diversity index remained low compared with baseline (Δ −1.2, p < 0.001). On treatment withdrawal, diversity increased by Week 8 (Δ 0.6, p < 0.001) but remained lower than at baseline. Beta diversity derived from Bray–Curtis analysis of fecal metagenomes showed significant differences in microbial composition between treatment timepoints (p = 0.001). Bacterial composition clustered separately during the OV treatment phase at Weeks 2 and 4 compared with baseline (both FDR-adjusted p < 0.01). The clusters at Weeks 2 and 4 were similar to one another.
Composition following vancomycin withdrawal (Week 8) clustered separately from that during treatment (both FDR adjusted p < 0.01). This clustering at Week 8 returned closer to baseline values but remained significantly different (FDR adjusted p < 0.01). At the phylum level, a reduction in the relative abundance of Bacteroidetes and Firmicutes , along with an increase in Proteobacteria , Fusobacteria , and Verrucomicrobia , was observed during OV treatment compared with baseline . Marked differences were observed in 68 species (FDR adjusted p < 0.1) following differential abundance analysis at species level at Week 4 compared with baseline . Notably, at Week 4 a significant decrease in Faecalibacterium prausnitzii , Anaerostipes hadrus , Bifidobacterium longum , and multiple SCFA producing species belonging to the Clostridium , Ruminococcus , Lachnospira , and Roseburia genera was observed when compared with baseline. Conversely, a significant increase in Fusobacterium nucleatum , Enterobacter hormaechei , Escherichia coli , and multiple species belonging to the Veillonella and Klebsiella genera was observed at Week 4 compared with baseline. Of note, Akkermansia muciniphila appeared to increase following treatment with OV. The relative abundance of the aforementioned phyla returned closer to baseline by Week 8 following vancomycin withdrawal. Importantly, the composition of 9 bacterial species at Week 8 showed differences when compared with Week 4 (FDR adjusted p < 0.1). Specifically, an increase in Roseburia hominis , Prevotella buccae , Lachnospiraceae bacterium , Blautia hansenii , and Bacteroides thetaiotaomicron , alongside a decrease in taxa belonging to the Proteobacteria phylum, was observed, suggesting a shift back toward baseline composition.
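The diversity metrics underpinning these analyses can be sketched as follows: Shannon alpha diversity summarizes within-sample richness and evenness, while Bray–Curtis dissimilarity is the pairwise measure behind the beta diversity comparisons. The count vectors below are hypothetical, not study data.

```python
import numpy as np

def shannon(counts):
    """Shannon alpha diversity (natural log) from raw taxon counts."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                      # 0 * log(0) is treated as 0
    return float(-(p * np.log(p)).sum())

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors.

    Ranges from 0 (identical composition) to 1 (no shared taxa).
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.abs(a - b).sum() / (a + b).sum())

# Hypothetical species counts for one patient: an even baseline
# community vs. a vancomycin-dominated one
baseline = [30, 25, 20, 15, 10]
on_ov = [5, 2, 1, 70, 2]
```

A community dominated by a few taxa, as during OV treatment, yields a lower Shannon index, mirroring the reduction in alpha diversity reported above.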
Comparing the composition of gut bacterial species in relation to fecal calprotectin in all patients, at both Week 4 versus baseline and Week 8 versus Week 4, revealed that reductions in fecal calprotectin levels were associated with decreases in multiple species including F. prausnitzii , Fusicatenibacter saccharivorans , and those belonging to the Clostridium , Ruminococcus , Lachnospira , and Roseburia genera (FDR adjusted p < 0.1). Reciprocally, a decrease in fecal calprotectin was associated with an increase in F. nucleatum and species belonging to the Proteobacteria phylum (FDR adjusted p < 0.1). These observations were confirmed with correlation analysis at all available timepoints between these species and fecal calprotectin values . 3.5. Significant shifts in gut microbial function following OV therapy Comparative metatranscriptomic analysis showed significant differences at both gene expression and pathway levels during OV treatment and the withdrawal phase at different timepoints ( p = 0.001; , ). The metatranscriptome during OV therapy at Weeks 2 and 4 clustered together on PCoA but differed significantly when compared with baseline (both p < 0.01). Pathway analysis revealed that OV therapy was associated with significant changes in 159 MetaCyc pathways and 259 KEGG pathways (FDR adjusted p < 0.1; , ). The pathways that were increased included those implicated in the mannitol cycle, alanine biosynthesis, dodecenoate biosynthesis, lipid IVA biosynthesis, and fatty acids biosynthesis. Metatranscriptomic pathways that were decreased included metabolic processes involved in butyrate and propanoate production, mannan and rhamnose degradation, and bacterial chemotaxis. Similarly, KEGG orthology and enzyme classification analysis revealed significantly different expression patterns of 2011 genes (1276 increased and 735 decreased) and 827 genes (530 increased and 297 decreased), respectively, at Week 4 compared with baseline.
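The FDR-adjusted p-values reported throughout these analyses are typically produced by a step-up procedure such as Benjamini–Hochberg; the specific adjustment software used in the study is not stated, so the following is an illustrative sketch with made-up p-values.

```python
import numpy as np

def bh_adjust(pvals):
    """Benjamini-Hochberg FDR-adjusted p-values (step-up procedure)."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    # scale each sorted p-value by n / rank
    ranked = p[order] * n / np.arange(1, n + 1)
    # enforce monotonicity from the largest p-value downwards
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]
    adjusted = np.empty(n)
    adjusted[order] = np.clip(ranked, 0, 1)
    return adjusted

# Hypothetical raw p-values from six differential-abundance tests
pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
adj = bh_adjust(pvals)
```

With the exploratory threshold used in the study (FDR-adjusted p < 0.1), four of the six hypothetical tests above would be called significant.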
Notably, shifts included a reduction in the expression of genes associated with conversion of primary to secondary BAs (choloylglycine hydrolase, 3-oxo-5-beta-steroid 4-dehydrogenase), enzymes involved in butanoate metabolism (3-hydroxybutyryl CoA dehydrogenase, butyrate kinase, fucose transmembrane transporter activity) and an increase in expression of primary amine oxidase and enzymes involved in lipopolysaccharide production (lipopolysaccharide glucosyltransferase, lipopolysaccharide galactosyltransferase). Metatranscriptome clustering at Week 8 (4 weeks of OV withdrawal) overlapped with baseline clustering, suggesting restoration of pre-vancomycin gene expression. Comparative metatranscriptomic analysis of pathways, KEGG orthology, and enzyme classifications at Week 8 in relation to Week 4 demonstrated a reversal of these shifts in the majority of involved metabolic processes. Correlation analysis between selected microbial gene expression datasets and fecal calprotectin revealed that calprotectin correlated positively with the expression of genes involved in BA metabolism (choloylglycine hydrolase, 3-oxo-5-beta-steroid 4-dehydrogenase), butyrate kinase, and primary amine oxidases. Linking bacterial compositional analysis with metatranscriptomic datasets identified that these metabolic pathway changes following OV therapy were a result of increased gene expression from specific microbial species including E. coli , F. nucleatum , and species belonging to the Klebsiella and Veillonella genera, and a decrease in gene expression from species that included A. hadrus , F. prausnitzii and species belonging to the Roseburia , Bacteroides , and Blautia genera. These are likely to represent changes in the abundance of these species following OV treatment rather than a true inherent change in gene expression. 3.6. Colonic mucosal metataxonomics reveals similar changes in mucosally adherent bacterial diversity and composition following OV Next, we proceeded to analyze the composition of mucosa-adherent gut bacteria.
Alpha diversity analysis demonstrated a significant reduction in mucosal bacterial diversity following 4 weeks of OV treatment (Δ −1.3, p < 0.001; , ), mirroring findings from the stool. Beta diversity analysis (Bray–Curtis) confirmed that the mucosal bacteria clustered significantly differently between the baseline and Week 4 samples ( p < 0.001; , ). Differential abundance analysis of mucosa-adherent bacterial composition showed a significant reduction in genera belonging to the families Lachnospiraceae ( Blautia , Ruminococcus , Faecalibacterium , Roseburia ), Bacteroidaceae , Butyricicoccaceae , and the Clostridia class following OV (FDR p < 0.1; , ). In contrast, an increase in genera belonging to the families Fusobacteriaceae , Enterobacteriaceae , Veillonellaceae , Clostridia vadin , and Akkermansiaceae was observed at Week 4 compared with baseline (FDR p < 0.1). A significant positive correlation was observed between fecal calprotectin values and Anaerostipes , Lachnospiraceae , and Blautia , and a negative correlation with Fusobacterium and Enterobacteriaceae. MetaCyc pathway analysis of microbiota functions inferred from 16S rRNA gene sequence profiles revealed a decreased expression of genes within multiple pathways that included mannan and chondroitin sulfate degradation, sulfur oxidation, and pyruvate fermentation to butanoate following treatment with OV. In contrast, pathways that were increased included phenylacetate degradation, fatty acid beta-oxidation, and enterobactin biosynthesis. Kyoto Encyclopaedia of Genes and Genomes orthology analysis of predictive mucosal metagenomic pathways revealed that vancomycin treatment resulted in a significant reduction in genes that include choloylglycine hydrolase and choline-sulfatase, and enrichment of genes that include aromatic-amino-acid transaminase and peptidoglycan lipid II flippase. 3.7.
Oral vancomycin is associated with marked alterations in a diverse range of gut BA profiles and selected SCFAs Stool BA profiling was first analyzed via multivariate analysis. A principal components analysis score plot of log-transformed, Pareto-scaled, stool BA profiles demonstrated clustering according to the week of sample collection . Specifically, stool samples collected prior to OV initiation (Week 0/baseline) and following withdrawal (Week 8) clustered together. Similarly, samples collected at Weeks 2 and 4 during OV treatment also clustered together, but separately from the baseline and Week 8 samples. This was consistent with OV causing sustained changes in the gut BA profile during its use, but rapid recovery back toward the baseline profile after completion of its use. Orthogonal partial least squares-discriminant analysis demonstrated strong models for the separation of stool BA profiles between baseline and Week 2 (R2X = 0.4, R2Y = 0.736, Q2 = 0.496, CV-ANOVA: p = 0.0010; , ), as well as baseline and Week 4 (R2X = 0.464, R2Y = 0.731, Q2 = 0.483, CV-ANOVA: p = 0.0018), but not between baseline and Week 8. S-plots were used to interrogate the specific BA species that changed in association with vancomycin use in the OPLS-DA models ; these were noteworthy for showing loss of secondary BAs (including derivatives of deoxycholic acid [DCA] and lithocholic acid [LCA]) in association with OV treatment, and enrichment of primary BAs (particularly glycoconjugates, including glycocholic acid). Univariate analysis of stool BA profiling supported these findings . Stool secondary BAs (including DCA and LCA)—together with their microbially metabolized derivatives, including isoDCA and isoLCA—were found at significantly reduced stool levels at Weeks 2 and 4 compared with baseline (adjusted p -value <0.01, Friedman’s test with Dunn’s statistical hypothesis testing).
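The multivariate preprocessing described above (log transformation, Pareto scaling, then projection onto principal components) can be sketched as follows. The samples-by-features matrix is synthetic, standing in for the real bile acid panel, and the use of log1p to handle zeros is an assumption rather than the study's stated transform.

```python
import numpy as np

def pareto_pca(X, n_components=2):
    """Log-transform, Pareto-scale (mean-centre, divide by the square
    root of each feature's SD), then compute PC scores via SVD.

    Sketch of the multivariate workflow described for the stool bile
    acid profiles; input is a samples x features matrix.
    """
    X = np.log1p(np.asarray(X, dtype=float))   # log-transform (log1p assumed)
    X = X - X.mean(axis=0)                     # mean-centre each feature
    X = X / np.sqrt(X.std(axis=0, ddof=1))     # Pareto scaling
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :n_components] * s[:n_components]  # PC scores

rng = np.random.default_rng(0)
# 6 samples x 4 hypothetical bile-acid features
X = rng.gamma(2.0, 50.0, size=(6, 4))
scores = pareto_pca(X)
```

Plotting the first two score columns per sample, colored by collection week, would reproduce the kind of score plot used to visualize clustering by timepoint.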
By Week 8 (after vancomycin washout), all secondary BAs showed an overall pattern of recovery toward baseline levels, although levels of stool DCA and LCA were still significantly reduced at this point compared with baseline (adjusted p < 0.05), with wide inter-individual variability regarding the degree of post-vancomycin recovery . Conversely, a range of primary BAs—including both unconjugated and glyco-conjugated variants of cholic acid and chenodeoxycholic acid—demonstrated significantly increased levels within stool at Weeks 2 and 4 during vancomycin treatment (adjusted p < 0.05), before recovery back to baseline levels by Week 8/after vancomycin washout . Univariate analysis was also performed for stool SCFA profiles, as well as a number of related short-chain carboxylic acids . Oral vancomycin use was associated with significantly reduced fecal levels of 2 SCFAs, butyrate and valerate, at Weeks 2 and 4 compared with baseline (adjusted p- value <0.01, Kruskal–Wallis test with Dunn’s statistical hypothesis testing; ), but with recovery toward baseline levels by Week 8; no significant changes in the levels of other stool SCFAs were observed in association with OV use. Additionally, OV use was associated with a significant increase in stool 2-hydroxybutyrate at Weeks 2 and 4 (adjusted p- value <0.01; ), which again recovered back to baseline levels by Week 8. 3.8. Host transcriptomics reveals significant changes in mucosal gene expression profiles following OV treatment Pairwise dissimilarity analysis of the mucosal transcriptome performed by NMDS revealed clear separation of the clustering of patients at baseline and at Week 4 following treatment with OV ( p = 0.004; ). Heatmap analysis of differentially expressed genes revealed clear clusters of downregulated and upregulated genes that appear to correlate with fecal calprotectin levels following OV therapy.
Differential gene expression analysis demonstrated a significant change in the expression of 843 genes with an increase in 629 genes and a decrease in 214 genes (FDR p < 0.1). Notable genes that had a decreased expression included those associated with immune-mediated inflammatory responses such as TNF (tumour necrosis factor) receptor superfamily ( TNFRSF6B , TNFSF10 ) and interleukin 1 receptor antagonist ( IL1RN ), cellular apoptosis such as caspase ( CASP5 , CASP10 ), tumor suppressor genes such as phospholipase and acyltransferase ( PLAAT2 ), antimicrobial responses ( RSAD2 , REG1A ), aquaporin ( AQP8 ) and calprotectin ( S100A9 ). Notable genes that demonstrated an increased expression included those involved in prostaglandin biosynthesis ( PTGS2 ), extracellular matrix ( HAPLN1 ), dendritic cell development ( FLT3LG ), lymphocyte trafficking ( ITGA5 , ITGA9 ) and copper-containing amine oxidase ( AOC3 ). Heatmap analysis of the differentially expressed genes following OV treatment showed distinct mucosal gene clusters whose expression correlated with fecal calprotectin levels. Gene ontology pathway analysis revealed significant downregulation of pathways that included antimicrobial humoral response, defence response, immune response, regulation of peptidase activity, oxoacid metabolic process, and bile salt transport (FDR p < 0.1). Conversely, pathways that were significantly increased included extracellular matrix organization, cell surface receptor signaling pathway, response to wounding, signal transduction, and regulation of cell adhesion. Additionally, KEGG pathway analysis identified the downregulation of pathways involved in butanoate and tryptophan metabolism following treatment with OV.
This study is the first to comprehensively explore host–microbial mechanisms associated with a reduction in colitis activity, in patients with PSC-IBD treated with OV. Our data demonstrated that after 4 weeks, 80% of patients attained clinical remission, and 100% showed mucosal healing. A significant reduction in fecal calprotectin was observed, with nearly all patients achieving biochemical remission by Week 4.
Moreover, clinical response was rapid, with partial Mayo colitis scores dropping within just 2 weeks of treatment. In turn, cessation of OV was associated with an increase in partial Mayo colitis scores and fecal calprotectin values, indicating a lack of sustained response following short-term vancomycin therapy. Another key finding was the rapid and significant change observed in gut microbial diversity, composition, and function during OV therapy. These changes were evident as early as Week 2 and remained stable until the end of the treatment period. Compositional analysis indicated a significant reduction in species within the Firmicutes and Bacteroidetes phyla, coupled with an increase in the Proteobacteria and Fusobacteria phyla. Notably, the reduction in fecal calprotectin levels strongly correlated with a reduced relative abundance of key SCFA-producing species, including Faecalibacterium , Lachnospiraceae , and Roseburia . These metabolically active, immunomodulatory species are traditionally associated with a ‘healthy’ gut state and are consistently shown to be depleted in diseased states, including conventional IBD. Despite the observed reduction in SCFAs, which are also known to regulate colonic Treg homeostasis, we hypothesize that vancomycin’s therapeutic effects on colitis activity may be independent of SCFA-mediated pathways and instead related to the modulation of BA metabolism. In parallel, a recent mouse model of PSC demonstrated that vancomycin exacerbated biliary inflammation, reduced SCFA production, and led to an expansion of pathogenic Escherichia and Enterococcus subsets.
Indeed, the reported shifts in microbial communities challenge the long-held paradigm that colitis remission must be associated with an increase in SCFA concentration. However, it is important to consider that the increase in Proteobacteria and reduction in SCFA-producing species conceivably represent collateral changes, rather than direct contributors to the efficacy of OV. , As clinical and biochemical improvements occurred despite these changes, OV may target disease mechanisms in PSC-IBD through pathways unrelated to SCFA production or Proteobacteria levels. For instance, vancomycin’s role in altering BA pathways and suppressing pro-inflammatory cytokine production may account for its putative therapeutic benefits. Pathways to colitis remission in PSC-IBD may differ under OV treatment compared with, for instance, currently available biologics and small molecules used to treat classical IBD. Indeed, we observed a significant increase in the relative abundance of A. muciniphila following vancomycin treatment, which diminished after treatment cessation. A. muciniphila has been implicated in improving gut barrier function by reducing plasma endotoxin levels and downregulating inflammatory responses. , While this observation warrants further investigation, it is possible that the increased abundance of A. muciniphila is a manifestation of its intrinsic resistance to vancomycin rather than a direct therapeutic effect. , Our study also uncovered significant reductions in microbial pathways involved in BA conversion and mannose biosynthesis after as little as 2 weeks of OV therapy. Reciprocally, we observed an upregulation of genes involved in heme biosynthesis, bacterial chemotaxis, and gut barrier regulation, such as the biosynthesis of multiple medium-chain fatty acids. These changes remained stable throughout the course of treatment and were partially reversed after vancomycin withdrawal.
Fecal SCFA profiling revealed an increase in 2-hydroxybutyrate levels, along with a reduction in butyrate and valerate, which returned to baseline after 4 weeks of therapy. The discordance between falling fecal calprotectin and the presence of specific pathogenic bacteria such as F. nucleatum and Klebsiella pneumoniae warrants further exploration, as such microbial shifts typically suggest increased inflammation. However, the reduction in calprotectin and the observed clinical improvements suggest that the aforementioned microbial changes may not be the primary drivers of efficacy under OV treatment. Previous studies have shown that OV was associated with changes in specific BA profiles, including a reduction in secondary BAs, which may contribute to modulating the inflammatory response in PSC-IBD. Taken together, this suggests that the observed reductions in fecal calprotectin, partial Mayo colitis scores, and endoscopic colitis activity are likely driven by these BA-related mechanisms. Further investigation is necessary to clarify these pathways, ideally via randomized controlled trials, and with rigorous comparisons between calprotectin levels, microbial shifts, and serial colonoscopic assessment. Vancomycin treatment resulted in a reduction in the expression of genes and pathways involved in pro-inflammatory immune responses, including TNF-α, IL-1 receptor, and antimicrobial genes such as REG1A and caspase-10. This was alongside downregulation of genes involved in intracellular BA transport and binding, such as FABP1 and SLC9A3, but not FXR or FGF19. These findings were consistent with our fecal BA profiling, which showed a marked reduction in secondary BAs, including derivatives of DCA and LCA, as early as Week 2, a commensurate increase in primary BAs, and a subsequent rise back to baseline values on treatment withdrawal.
These findings are critically important, given the growing body of evidence showing that 3-oxoLCA and isoalloLCA inhibit the differentiation of effector Th17 cells, while isoDCA promotes the differentiation of regulatory T cells. The FDR threshold of p < 0.1 used in this study was chosen to balance sensitivity and exploratory goals. While this allows for hypothesis generation, it may increase the risk of false positives, and future validation studies are needed to confirm these findings. With this caveat in mind, we observed a reduction in the expression of key microbial genes, such as choloylglycine hydrolase (bile salt hydrolase, BSH) and hydroxysteroid dehydrogenase (HSD), following vancomycin treatment. These enzymes play a critical role in the conversion of primary to secondary BAs and significantly correlate with fecal calprotectin levels. The loss of secondary BAs also correlated with a reduction in colonic inflammation, indicating that imbalances in BA homeostasis may drive gut inflammation. Similar findings have been reported by others, with 1 study demonstrating a reduction in secondary BAs and improved insulin sensitivity following 1 week of OV therapy. While the trial provides important insights into host–microbial changes associated with vancomycin therapy, several limitations need to be addressed. Our study’s cohort size was small, and the lack of a control group means that study findings need to be interpreted with caution. However, ours was designed as an exploratory open-label trial with mechanistic endpoints, to assess the short-term effects of OV in patients with PSC-IBD, specifically with regard to colitis activity. As such, liver transplant recipients were included by design, given that approximately 40% develop flares in colitis activity, and the fact that gut microbial changes persist when compared with individuals without PSC.
We nevertheless acknowledge that this introduces potential variability, especially related to BA changes and the effects of post-transplant immunosuppression, which could influence readouts from omics data. Secondly, a full colonoscopy is preferred in the assessment of PSC-IBD activity, due to some patients having a predominance of right-sided colitis. However, it was deemed inappropriate by our patient and public involvement group to perform 2 consecutive colonoscopies (baseline and 4 weeks post-intervention) so close together. Regardless, all study participants had evident recto-sigmoid inflammation at baseline, making sigmoidoscopy a pragmatic and less invasive option to assess colitis activity and perform gut mucosal sampling. The observed improvements in clinical and biochemical markers also suggest that sigmoidoscopy was sufficient for assessing treatment response in the short term. However, future longer-term studies should consider a full colonoscopy for a more comprehensive evaluation, especially in PSC with right-sided colonic disease. It must also be stressed that the gene expression dataset, although modest in size, was used for exploratory hypothesis generation, focusing on known pathways of interest, such as immune regulation, inflammation, and BA metabolism. Larger datasets are typically preferred for pathway identification, but the selected genes were relevant to the study’s objectives. While we demonstrate on- versus off-treatment (washout) effects, ours was an open-label, uncontrolled study. Thus, any improvements in clinical, endoscopic, and biochemical markers should be viewed as associations with vancomycin therapy rather than definitive evidence of a treatment effect. Future randomized controlled trials (RCTs) are necessary to confirm the therapeutic effects, ideally over longer treatment periods. Moreover, the reductions in serum ALP and ALT (while encouraging) are by no means definitive evidence of benefit.
Randomized controlled trials of BA therapy and anti-fibrotics have shown meaningful reductions in biochemistry, albeit with large intraindividual variability over time, and no real association with harder efficacy outcome measures. This highlights the need to incorporate a broader range of biomarkers in future trials of OV therapy, including, for instance, the enhanced liver fibrosis score, transient elastography, and quantitative measures of ductal disease severity. No incidents of vancomycin-resistant Enterococcus were detected under OV treatment, which is consistent with prior studies in the pediatric literature. Nevertheless, the potential risk of antimicrobial resistance under long-term therapy warrants further study and is essential for later-phase trials of OV in PSC. This is particularly relevant given that biliary fibrosis is augmented by OV in animal models, in line with reduced SCFA production and an expansion of pathogenic Escherichia and Enterococcus subsets. Lastly, the depletion of BSH-producing species may have confounded BA findings, as their loss could affect BA composition independently of OV treatment. Moving forward, the next phase would be to conduct a randomized trial of OV in patients with PSC-IBD, against an appropriately matched placebo control group. It is important to dissect whether the therapeutic effects of vancomycin are PSC-IBD specific, or whether they can be generalized to IBD patients who do not have PSC. Of interest, previous double-blind RCTs of vancomycin in non-PSC IBD patients did not demonstrate significant benefit over placebo, lending further support to the notion that PSC is associated with a unique form of colitis distinct from UC or CD alone. In conclusion, OV appears to induce clinical remission in PSC-IBD, likely through modulation of BA metabolism and gut microbial function. While study findings offer insight into the effects of OV, this is but one part of understanding disease pathogenesis.
Given the variability in treatment responses, and the heterogeneous nature of PSC (including some patients who do not have IBD), further research is needed to explore mechanistic links between mucosal immunity, liver autoimmunity, and chronic biliary disease.
Association between circulating leukocytes and arrhythmias: Mendelian randomization analysis in immuno-cardiac electrophysiology
T lymphocytes elicit cell-mediated immunity, and subsets of T lymphocytes may produce cytokines such as IFN-γ, IL-2, or IL-17, exacerbating neutrophilic inflammation and promoting micro-scar formation within myocardial tissue, leading to insulating fibrosis. B lymphocytes can promote cardiac arrhythmias by means of autoantibodies targeting specific calcium, potassium, or sodium channels on the surface of cardiomyocytes. Last, less is known about the function of basophils and eosinophils in arrhythmias, but recent experimental data highlight that basophil-derived IL-4 plays an essential role in the heart by balancing macrophage polarization, while eosinophils may play an anti-inflammatory and cardioprotective role after myocardial infarction, reducing cardiomyocyte death and inflammatory cell accumulation. Owing to these pioneering works, researchers have attempted to integrate electrophysiology and immunology, and the new term “immuno-cardiac electrophysiology” was introduced to highlight the emerging essential role of immune cells in arrhythmias. Circulating leukocytes are crude markers of an individual’s systemic immunological status, and they can modulate local inflammatory responses. Cell numbers are the most critical parameter for the homeostasis of circulating immune cells. So far, several cross-sectional clinical surveys have linked circulating leukocyte counts to the incidence of cardiac arrhythmias. In the CALIBER study of 775,231 individuals, high neutrophil count, low eosinophil count, and low lymphocyte count were associated with ventricular arrhythmia. In the Framingham Heart Study, white blood cell counts correlated with the risk of atrial fibrillation. Other studies have linked the risk of atrial fibrillation to eosinophil count and the proportion of monocyte subsets.
However, that literature does not definitively establish a role for leukocyte counts in the pathogenesis of arrhythmias, because observational studies are prone to residual unmeasured confounding and reverse causation. Of particular concern is the potential for reverse causation: atrial fibrillation itself might promote systemic inflammation during atrial remodeling and induce a spurious inverse association. In addition, observational studies have come to conflicting conclusions about the association of leukocyte counts with supraventricular tachycardia. Therefore, evidence from observational studies alone is insufficient. The causal effect of leukocyte counts on the risk of arrhythmias remains unknown, and additional studies are needed to characterize the role of each immune cell subtype in different types of arrhythmias. Addressing these causal questions can accelerate the discovery of mechanisms underlying disease and open new prevention and treatment avenues. Mendelian randomization (MR) is an epidemiologic approach that strives to address some key limitations of observational studies, such as confounding and reverse causation. It uses genetic variants, usually single-nucleotide polymorphisms (SNPs), as proxies for an exposure (analogous to a clinical intervention) in order to assess whether the genetic variants are associated with the outcome. In this way, MR supports inferences about causality, placing it at the interface between traditional observational epidemiology and interventional trials. MR should be robust to confounders, given that alleles are randomly distributed at conception, and it should be robust to reverse causation, since an individual’s genetic code is fixed at birth, before the outcome of interest. In the present study, two-sample MR was used to estimate whether leukocyte counts cause changes in arrhythmia risk, based on summary data in genome-wide association studies (GWAS).
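The core logic of MR can be illustrated with a minimal simulation (illustrative only, not data from this study): an unmeasured confounder biases the naive regression of outcome on exposure, while the Wald ratio built from a randomly allocated genetic variant recovers the true causal effect. All variable names and effect sizes below are made up for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# G: genetic variant (randomly assigned at conception, 0/1/2 effect alleles)
# U: unmeasured confounder of the exposure-outcome relationship
g = rng.binomial(2, 0.3, n)
u = rng.normal(size=n)
x = 0.5 * g + 1.0 * u + rng.normal(size=n)  # exposure (e.g., a leukocyte count)
y = 0.2 * x + 1.0 * u + rng.normal(size=n)  # outcome; true causal effect = 0.2

# Naive observational estimate (OLS slope of y on x) is inflated by U.
beta_obs = np.cov(x, y)[0, 1] / np.var(x)

# Wald ratio: (G-outcome association) / (G-exposure association).
# G is independent of U, so the confounding cancels out of the ratio.
beta_mr = np.cov(g, y)[0, 1] / np.cov(g, x)[0, 1]
```

With these simulated effect sizes, `beta_obs` lands well above the true value of 0.2, while `beta_mr` is close to it, which is the sense in which MR sits between observational epidemiology and an interventional trial.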
Methods 2.1 Study design For the current study, we conducted two-sample MR analysis of circulating leukocyte counts on arrhythmias using data from publicly available GWAS. Five leukocyte subtypes were considered: neutrophils, eosinophils, basophils, monocytes, and lymphocytes. Arrhythmia was defined as all types in aggregate or as one of the following five specific types: atrial fibrillation, atrioventricular block, left bundle branch block (LBBB), right bundle branch block (RBBB), and paroxysmal tachycardia. All study procedures were performed in accordance with the World Medical Association Declaration of Helsinki ethical principles for medical research. Ethics approval was considered unnecessary for the present study because the included GWAS reported appropriate ethical approval from their respective institutions, and the present analyses were performed only on summary-level data. 2.2 Selection of genetic instruments for circulating leukocyte counts We extracted summary statistics from the largest meta-analyzed GWAS data provided by the Blood Cell Consortium. The Blood Cell Consortium Phase 2 includes 563,946 European participants from 26 GWAS cohorts, after excluding patients with blood cancer, acute medical/surgical illness, myelodysplastic syndrome, bone marrow transplant, congenital/hereditary anemia, HIV, end-stage kidney disease, splenectomy, cirrhosis, or extreme blood cell counts. An overview of the data sources is provided in , and more detail is available in the original article. SNPs associated with each of the five leukocyte counts were selected at the genome-wide significance level (p < 5×10⁻⁸) and defined as genetic instruments. To ensure that SNPs were independent, a clumping procedure was performed, and the SNPs were pruned at a stringent linkage disequilibrium (LD) threshold of R² < 0.001 within a 10,000-kb window.
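As a rough illustration of this selection step, SNPs passing the significance threshold can be greedily pruned so that no two retained variants within the window exceed the r² cutoff. This is a simplified sketch: real pipelines (e.g., PLINK-style clumping, as wrapped by the TwoSampleMR package) compute LD from a reference panel, and the SNP names and r² lookup below are hypothetical.

```python
# Thresholds as used in the study.
P_THRESHOLD = 5e-8      # genome-wide significance
R2_THRESHOLD = 0.001    # stringent LD cutoff
WINDOW_KB = 10_000      # clumping window

def clump(snps, r2):
    """Greedy LD clumping.

    snps: list of (name, chrom, pos_kb, pvalue) tuples.
    r2:   dict mapping frozenset({snp_a, snp_b}) -> pairwise r^2.
    Returns names of retained (approximately independent) instruments.
    """
    # Keep only genome-wide significant SNPs, most significant first.
    candidates = sorted((s for s in snps if s[3] < P_THRESHOLD), key=lambda s: s[3])
    kept = []
    for snp in candidates:
        independent = all(
            not (snp[1] == k[1]                         # same chromosome
                 and abs(snp[2] - k[2]) <= WINDOW_KB    # within the window
                 and r2.get(frozenset({snp[0], k[0]}), 0.0) >= R2_THRESHOLD)
            for k in kept
        )
        if independent:
            kept.append(snp)
    return [s[0] for s in kept]

# Toy example (hypothetical SNPs): rsB is in LD with the stronger rsA and is
# dropped; rsC sits on another chromosome; rsD fails the p-value threshold.
snps = [("rsA", 1, 100, 1e-20), ("rsB", 1, 150, 1e-12),
        ("rsC", 2, 100, 1e-9), ("rsD", 1, 120, 0.01)]
r2 = {frozenset({"rsA", "rsB"}): 0.8}
selected = clump(snps, r2)
```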
The proportions of variance in the respective leukocyte counts explained by the selected SNPs were estimated, and F-statistics were calculated as measures of instrument strength. The F value for all genetic instruments was > 10, ensuring that bias from weak instruments would be <10% at least 95% of the time. 2.3 Data sources for arrhythmia To more thoroughly evaluate the association of leukocyte counts and the risk of arrhythmias, we aimed to include all eligible GWAS of arrhythmias by extensively searching the public Integrative Epidemiology Unit (IEU) GWAS database (https://gwas.mrcieu.ac.uk/). We selected the GWAS with the largest samples, leading to seven GWAS whose summary statistics for different types of arrhythmias were used in the present study. Genetic association estimates for the outcome of all types of arrhythmia were obtained from the UK Biobank (UK Biobank field ID 20002, value 1077), based on the UKB GWAS pipeline set up for the MRC IEU. We restricted the analytical cohort to individuals of European descent. Individuals with cardiac arrhythmia were identified via self-report during a face-to-face interview with a trained nurse. The GWAS dataset on atrial fibrillation was obtained from a meta-analysis comprising 1,030,836 participants of European ancestry. Cases of atrial fibrillation were defined as those patients with paroxysmal atrial fibrillation, permanent atrial fibrillation, or atrial flutter. Summary data for the other four types of arrhythmias (atrioventricular block, LBBB, RBBB, and paroxysmal tachycardia) were retrieved from the FinnGen project (release 2), where cases were defined as those assigned the corresponding ICD-10 diagnosis codes. Specifically, cases of atrioventricular block were defined as patients with first degree (ICD-10: I440), second degree (I441), third degree atrioventricular block (I442), or other unspecified atrioventricular block (I443).
LBBB included left anterior fascicular block (I444), left posterior fascicular block (I445), other fascicular block (I446), and unspecified LBBB (I447), while RBBB included right fascicular block (I450) and other RBBB (I451). Paroxysmal tachycardia referred to re-entry ventricular tachycardia, supraventricular tachycardia, ventricular tachycardia, and unspecified paroxysmal tachycardia (I47). The FinnGen project included 102,739 Finnish participants and combined genetic data from Finnish biobanks with health records from Finnish health registries. Further details on data sources are included in . Prior to the MR analyses, we harmonized the SNPs identified from the exposure GWAS with SNPs in the outcome GWAS in order to align alleles on the same strand. 2.4 Statistical analyses We used the inverse-variance weighted (IVW) method as the primary analysis. We then applied a range of sensitivity analyses to assess the robustness of the IVW findings against potential violations, including MR-Egger, weighted median, MR-PRESSO, and multivariable MR (MVMR) analyses. Although these methods have relatively low statistical efficiency on their own, they have different theoretical properties to control for different types of biases, and they are robust to certain assumption violations. The IVW method (random-effects model) provides the greatest statistical power, assuming all genetic instruments are valid. This method is equivalent to a weighted linear regression of the SNP-outcome effects on the SNP-exposure effects, with the intercept constrained to zero. Owing to this constraint, it can yield a relatively high rate of false positives in the presence of horizontal pleiotropy. Cochran’s Q statistic from the IVW analysis was used for global heterogeneity testing. Because pleiotropy is one of the main sources of heterogeneity, low heterogeneity (Cochran’s Q p > 0.05) implies a low likelihood of pleiotropy.
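From summary statistics, the IVW estimate and Cochran’s Q statistic can be computed in a few lines. This is a minimal sketch of the standard formulas (not the TwoSampleMR implementation, which the study actually used), with toy inputs chosen so every Wald ratio equals 0.5.

```python
import numpy as np

def ivw(beta_exp, beta_out, se_out):
    """Inverse-variance weighted MR estimate from summary statistics.

    Per-SNP Wald ratios beta_out/beta_exp are combined with weights
    (beta_exp/se_out)^2, which is equivalent to a weighted regression of
    SNP-outcome effects on SNP-exposure effects through the origin.
    Returns (estimate, fixed-effect SE, Cochran's Q).
    """
    beta_exp, beta_out, se_out = map(np.asarray, (beta_exp, beta_out, se_out))
    w = beta_exp**2 / se_out**2
    ratios = beta_out / beta_exp
    est = np.sum(w * ratios) / np.sum(w)
    se = 1.0 / np.sqrt(np.sum(w))        # random-effects models inflate this
                                         # when Q indicates heterogeneity
    q = np.sum(w * (ratios - est) ** 2)  # Cochran's Q: excess heterogeneity
    return est, se, q

# Toy inputs: three SNPs whose Wald ratios are all 0.5, so Q ~ 0.
est, se, q = ivw([0.1, 0.2, 0.4], [0.05, 0.1, 0.2], [0.01, 0.01, 0.02])
```

A large Q relative to a chi-squared distribution with (number of SNPs − 1) degrees of freedom flags heterogeneity, which is how the pleiotropy screen described above operates.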
MR-Egger regression is performed similarly to IVW, except the intercept is not fixed to zero. Therefore, the slope coefficient of MR-Egger regression gives an adjusted causal estimate, even when pleiotropy is present. The intercept of MR-Egger regression is an indicator of the average pleiotropic effect across the genetic variants. An intercept of zero associated with p > 0.05 was considered evidence for the absence of pleiotropic bias. The weighted median method is a consensus approach that takes the median of the ratio estimate distribution as the overall causal estimate. It has the advantage that it provides unbiased estimates when more than 50% of the weight comes from valid variants. It is less affected when a few genetic variants have pleiotropic effects, and it can be viewed as an implicit outlier removal approach. MR-PRESSO, a more recently proposed MR method, is a variation on the IVW method. The MR-PRESSO global test is used to assess the presence of overall horizontal pleiotropy. If pleiotropy is detected, the MR-PRESSO outlier test allows the detection of individual pleiotropic outliers through calculation of the residual sum of squares. Finally, the causal estimate is obtained by applying the IVW method to the genetic variants remaining after exclusion of outliers. Steiger filtering, which computes the amount of variance each SNP explains in the exposure and in the outcome variable, identifies variant instruments that are likely to reflect reverse causation. When significant horizontal pleiotropy was detected, we also used Cook’s distance to identify outliers; Cook’s distance flags SNPs that exert disproportionate influence on the overall estimates. MVMR, an extension of the standard MR approach, considers multiple correlated exposures within a single model, allowing the disentanglement of independent associations of each exposure with the outcome.
This method was performed while considering associations of SNPs with diabetes mellitus (DM), hypertension, and coronary artery disease (CAD) as covariates in order to estimate the direct effects of leukocyte counts independently of risk factors known to influence the risk of arrhythmia. Given the strong correlations between leukocyte subtypes, we also performed MVMR to determine the effect of each of the five leukocyte subtypes separately on arrhythmia, after adjusting for the effects of the other four subtypes. We performed reverse-direction MR analysis to evaluate whether there is genetic evidence for the possibility that arrhythmia alters circulating leukocyte counts. Because we detected few genome-wide significant SNPs for arrhythmias (defined as p < 5×10⁻⁸), we used a less stringent statistical threshold (p < 1×10⁻⁵) to select genetic instruments. In fact, we were unable to detect eligible SNPs associated with the aggregated occurrence of all types of arrhythmia, even at the suggestive level of p < 1×10⁻⁵, so this outcome was not included in the analysis. In this reverse-direction analysis, IVW, MR-Egger, and weighted median analyses were performed as described above. All statistical analyses were conducted using the TwoSampleMR, MendelianRandomization, and MR-PRESSO packages in R (version 4.0.3). 2.5 Interpretation of results Normally, Bonferroni-corrected p values are used to adjust for multiple testing. However, given the large number of arrhythmia outcomes and leukocyte counts in the study, we judged this correction procedure to be unnecessarily conservative. Therefore, we applied the conventional p value threshold of 0.05, and we interpreted p values near 0.05 with caution. We considered causal associations to be strongly supported if the following four criteria were satisfied.
(1) Primary IVW analysis gave a statistically significant causal estimate (p < 0.05). (2) All sensitivity analyses yielded concordant estimates, despite making different assumptions. (3) No evidence of unbalanced horizontal pleiotropy was observed, defined as p > 0.05 for Cochran’s Q statistic, the MR-Egger intercept test, and the MR-PRESSO global pleiotropy test. (4) No evidence of reverse causation from arrhythmias to leukocyte differential counts was observed, defined as p > 0.05 in the IVW, MR-Egger, and weighted median analyses in reverse MR analysis.
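The weighted median estimator used among the sensitivity analyses above admits a compact sketch, following the standard percentile-interpolation definition (a simplified illustration, not the MendelianRandomization package code; the ratios and weights below are made up):

```python
import numpy as np

def weighted_median(ratios, weights):
    """Weighted median of per-SNP Wald ratios.

    Consistent whenever more than 50% of the total weight comes from
    valid instruments, which is why it tolerates a minority of
    pleiotropic variants.
    """
    order = np.argsort(ratios)
    r = np.asarray(ratios, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cum = (np.cumsum(w) - 0.5 * w) / np.sum(w)  # weight percentile at each ratio
    return float(np.interp(0.5, cum, r))        # interpolate to the 50% point

# Hypothetical example: three valid instruments agreeing on 1.0 plus one
# low-weight pleiotropic outlier at 10.0; the outlier barely matters.
est = weighted_median([1.0, 1.0, 1.0, 10.0], [1.0, 1.0, 1.0, 0.5])
```

A simple mean of the same ratios would be pulled toward the outlier, which illustrates the "implicit outlier removal" property described above.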
Effect estimates for dichotomous outcomes were reported as odds ratios (ORs) with corresponding 95% confidence intervals (CIs).

Interpretation of results

Normally, Bonferroni-corrected p values are used to adjust for multiple testing. However, given the large number of arrhythmia outcomes and leukocyte counts in the study, we judged this correction procedure to be unnecessarily conservative. Therefore, we applied the conventional p value threshold of 0.05 and interpreted p values near 0.05 with caution. We considered causal associations to be strongly supported if the following four criteria were satisfied: (1) the primary IVW analysis gave a statistically significant causal estimate (p < 0.05); (2) all sensitivity analyses yielded concordant estimates despite making different assumptions; (3) no evidence of unbalanced horizontal pleiotropy was observed, defined as p > 0.05 for Cochran's Q statistic, the MR-Egger intercept test, and the MR-PRESSO global pleiotropy test; and (4) no evidence of reverse causation from arrhythmias to leukocyte differential counts was observed, defined as p > 0.05 in the IVW, MR-Egger, and weighted median analyses in reverse MR analysis.

Results

3.1 Circulating leukocyte counts and heart arrhythmias: Primary results

First, we investigated the causal effect of each leukocyte subtype count on arrhythmias using IVW methods with multiplicative random effects. The IVW approach is recommended as the primary method in MR analysis because it is optimally efficient when all genetic variants are valid. The results of IVW analysis are presented in . We did not find clear evidence supporting causal effects of any leukocyte subtype counts on the overall occurrence of all-type arrhythmia. Nevertheless, there was evidence that different leukocyte subtype counts causally affected three specific types of arrhythmias.
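The four-part decision rule in the "Interpretation of results" paragraph above can be written as a small boolean check; a minimal sketch (the argument names are illustrative, not from the study's code):

```python
def strongly_supported(p_ivw, sensitivity_concordant,
                       p_cochran_q, p_egger_intercept, p_presso_global,
                       reverse_mr_pvalues):
    """Apply the paper's four criteria for a 'strongly supported' causal link:
    significant IVW estimate, concordant sensitivity analyses, no unbalanced
    horizontal pleiotropy, and no evidence of reverse causation."""
    no_pleiotropy = min(p_cochran_q, p_egger_intercept, p_presso_global) > 0.05
    no_reverse = all(p > 0.05 for p in reverse_mr_pvalues)
    return (p_ivw < 0.05 and sensitivity_concordant
            and no_pleiotropy and no_reverse)
```

Plugging in the lymphocyte/atrioventricular-block p values reported later in this section (IVW p=0.0065; Cochran's Q p=0.586; MR-Egger intercept p=0.249; MR-PRESSO global p=0.566; reverse-MR p=0.44) satisfies all four criteria.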
A genetically estimated 1-standard-deviation increase in lymphocyte count was associated with a 46% higher risk of atrioventricular block (OR 1.46, 95% CI 1.11–1.93, p=0.0065). We also found moderate evidence for causal effects of basophil count on atrial fibrillation (OR 1.08, 95% CI 1.01–1.58, p=0.0237) and of neutrophil count on RBBB (OR 2.32, 95% CI 1.11–4.86, p=0.0259). No significant associations were observed for the other outcomes.

3.2 Sensitivity analyses of positive results

We assessed the robustness of the significant causal estimates from the above IVW analysis using sensitivity analyses. These sensitivity analyses are generally considered less powerful than the conventional IVW approach but are robust to different forms of bias (see Methods). We therefore conducted MR-Egger, MR-PRESSO, weighted median, Steiger filtering, and multivariable MR analyses on the following three exposure–outcome combinations: (1) lymphocyte count and atrioventricular block, (2) neutrophil count and RBBB, and (3) basophil count and atrial fibrillation.

3.2.1 Lymphocyte count and atrioventricular block

Sensitivity analyses supported the causal link between lymphocyte count and atrioventricular block: the MR-Egger approach indicated an OR of 1.95 (95% CI 1.12–3.39; p=0.019), and the weighted median approach an OR of 1.76 (95% CI 1.20–2.78; p=0.015). With respect to pleiotropy detection, Cochran's Q test gave a p value of 0.586, suggesting no evidence of heterogeneity between genetic instruments and therefore no pleiotropy. Similarly, bias due to pleiotropy was not detectable in the IVW analyses, based on a p value of 0.249 for the MR-Egger intercept test and 0.566 for the MR-PRESSO global pleiotropy test. Additionally, the absence of outliers detected through Steiger filtering reinforced this conclusion.
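MR effect estimates for binary outcomes are computed on the log-odds scale and then exponentiated to the ORs reported above. A small sketch of that conversion (the standard error below is chosen only to roughly reproduce the reported lymphocyte/atrioventricular-block CI; it is not taken from the study):

```python
import math

def beta_to_or(beta, se, z=1.96):
    """Convert a log-odds causal estimate and its standard error into an
    odds ratio with a 95% confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Illustrative: log(1.46) per 1-SD increase in lymphocyte count, se ~0.141
or_, lo, hi = beta_to_or(math.log(1.46), 0.141)
```

With these inputs the CI comes out close to the reported 1.11–1.93, showing how the OR and its bounds relate to a single (beta, se) pair.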
Using MVMR analysis, we confirmed this causal relationship after adjusting for risk factors of arrhythmia (CAD, DM, and hypertension) and for effects from the other four subtypes of leukocytes.

3.2.2 Neutrophil count and RBBB

The weighted median method (OR 3.40, 95% CI 1.02–11.28; p=0.049) and the MR-Egger method (OR 3.13, 95% CI 0.64–15.32; p=0.158) produced results similar to those of the primary IVW analysis. However, these CIs were wide and the p values were near or above 0.05, likely reflecting limited statistical power. There was no indication of heterogeneity or pleiotropy in the corresponding Cochran's Q test (p=0.200), MR-Egger intercept test (p=0.674), or MR-PRESSO global pleiotropy test (p=0.209), and Steiger filtering did not detect any outliers. In MVMR analysis, accounting for counts of lymphocytes and eosinophils abolished the direct effect of neutrophil count on RBBB. Together, these analyses suggest no direct, independent effect of neutrophil count on the risk of RBBB.

3.2.3 Basophil count and atrial fibrillation

For basophil count and atrial fibrillation, the issue of horizontal pleiotropy is a particular concern. Although the intercept estimated from MR-Egger regression was centered around zero (−0.0004, p=0.801) and Steiger filtering did not identify any outliers, we detected overall horizontal pleiotropy among all genetic instruments using MR-PRESSO (global pleiotropy p<0.001). After removing five outlier SNPs, the causal estimate of basophil count on atrial fibrillation no longer achieved statistical significance (MR-PRESSO outlier correction p=0.106). Similarly, effect estimates from MR-Egger and weighted median analyses were not significant. In conclusion, these analyses suggest that the estimate from IVW analysis may be strongly affected by pleiotropy, and that no compelling evidence exists in support of a causal association between basophil count and atrial fibrillation.
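The MVMR adjustments used in these subsections amount to a weighted multiple regression of SNP-outcome effects on several sets of SNP-exposure effects at once. A minimal two-exposure sketch with toy numbers (illustrative only; the study adjusted for DM, hypertension, CAD, and the other leukocyte subtypes via the packaged R implementation):

```python
def mvmr_two_exposures(bx1, bx2, by, se_y):
    """Multivariable MR with two exposures: weighted least squares of
    SNP-outcome effects on two sets of SNP-exposure effects, no intercept.
    Solves the 2x2 normal equations in closed form."""
    w = [1.0 / s ** 2 for s in se_y]
    a = sum(wi * x * x for wi, x in zip(w, bx1))
    b = sum(wi * x1 * x2 for wi, x1, x2 in zip(w, bx1, bx2))
    d = sum(wi * x * x for wi, x in zip(w, bx2))
    c1 = sum(wi * x * y for wi, x, y in zip(w, bx1, by))
    c2 = sum(wi * x * y for wi, x, y in zip(w, bx2, by))
    det = a * d - b * b
    return (d * c1 - b * c2) / det, (a * c2 - b * c1) / det

# Toy data in which the outcome is exactly 2*exposure1 + 3*exposure2
b1, b2 = mvmr_two_exposures([1.0, 0.0, 1.0], [0.0, 1.0, 1.0],
                            [2.0, 3.0, 5.0], [1.0, 1.0, 1.0])
```

Because both exposures enter one model, each coefficient is the direct effect of that exposure conditional on the other, which is why adjusting for lymphocyte and eosinophil counts can abolish an apparent neutrophil effect.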
3.3 Sensitivity analyses of negative results

To reduce the incidence of false-negative findings, sensitivity analyses (MR-Egger, weighted median, Steiger filtering, MR-PRESSO) were also performed to assess the validity of negative results. We focused on the causal relationships of lymphocyte count and neutrophil count with atrial fibrillation. For lymphocyte count and atrial fibrillation, both the MR-Egger and MR-PRESSO methods gave negative, non-significant estimates similar to those of the IVW analysis. Only the weighted median analysis showed a significant, albeit small, effect. Steiger filtering identified one outlier SNP, but the results of the above analyses remained essentially unchanged after removing it. The finding from the weighted median analysis alone is insufficient evidence. Overall, we conclude that there is no strong evidence for a causal association between lymphocyte count and risk of atrial fibrillation. For neutrophil count and atrial fibrillation, the MR-Egger and weighted median analyses showed statistically significant causal estimates, which was inconsistent with the IVW analysis. These two methods are perceived as having natural robustness to pleiotropy. Meanwhile, we found evidence of pleiotropy based on the p values for the MR-Egger intercept test (p=0.013) and the MR-PRESSO global pleiotropy test (p<0.001), as well as evidence of substantial heterogeneity based on Cochran's Q statistic (p<0.001). We suspect that pleiotropy biased the effect estimate towards the null in the IVW analysis, even though pleiotropy more often biases estimates away from the null. To remove potential pleiotropy as much as possible, we applied two additional methods (MR-PRESSO outlier test and Cook's distance) to identify and exclude potential outliers. Using MR-PRESSO and Cook's distance, we identified 9 and 19 outliers, respectively.
After removing the outlier SNPs, the causal estimates still did not reach statistical significance; in fact, the estimates were even smaller than before. Taken together, our analyses indicate no compelling evidence for a causal effect of neutrophil count on atrial fibrillation. Sensitivity analyses of the other 29 exposure–outcome combinations yielded negative findings similar to those of the IVW analyses.

3.4 Reverse MR analysis to assess the effect of arrhythmias on leukocyte counts

To examine the possibility that reverse causation could be driving our findings, we performed extensive reverse MR analysis in which the risk of arrhythmia was the exposure and the counts of the five leukocyte subtypes were the outcomes. Although the IVW analysis showed that atrial fibrillation, paroxysmal tachycardia, LBBB, and RBBB all had effects on the differential leukocyte counts, the effect sizes were so small that their practical significance is highly questionable. Moreover, these causal effects did not achieve statistical significance in either MR-Egger or weighted median analysis. Therefore, we did not find any robust evidence of reverse associations. In particular, we did not observe a causal effect of atrioventricular block on lymphocyte count in the IVW analysis (OR 1.001, 95% CI 0.998–1.004; p=0.44); similar results were observed in the MR-Egger and weighted median analyses.
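The outlier screening used for the neutrophil–atrial fibrillation analysis can be approximated with a simple leave-one-out influence check: refit the IVW estimate with each SNP removed and flag the SNP whose removal shifts the estimate most. This is a simplified stand-in for MR-PRESSO's outlier test and Cook's distance, shown with toy numbers:

```python
def ivw(bx, by, se_y):
    """First-order IVW estimate: regression through the origin, weights 1/se^2."""
    w = [1.0 / s ** 2 for s in se_y]
    num = sum(wi * x * y for wi, x, y in zip(w, bx, by))
    den = sum(wi * x * x for wi, x in zip(w, bx))
    return num / den

def most_influential_snp(bx, by, se_y):
    """Index of the SNP whose removal changes the IVW estimate the most."""
    full = ivw(bx, by, se_y)
    shifts = []
    for i in range(len(bx)):
        rest = [j for j in range(len(bx)) if j != i]
        loo = ivw([bx[j] for j in rest], [by[j] for j in rest],
                  [se_y[j] for j in rest])
        shifts.append(abs(loo - full))
    return max(range(len(shifts)), key=shifts.__getitem__)

# Three concordant SNPs (ratio 0.5) plus one strong outlier at index 3
bx = [0.10, 0.08, 0.12, 0.10]
by = [0.05, 0.04, 0.06, 0.50]   # the last Wald ratio (5.0) is far from 0.5
se = [0.01, 0.01, 0.01, 0.01]
```

Removing the flagged SNP and refitting is the same "exclude outliers, then re-estimate" pattern described in the text.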
Discussion

In this study, using large publicly available genomic datasets, we conducted MR analyses to investigate the causal effects of leukocyte counts on different types of arrhythmias. Our principal finding is that a genetically determined high lymphocyte count increases the risk of atrioventricular block. In contrast, we did not detect a significant causal effect of either neutrophil or lymphocyte count on risk of atrial fibrillation. Although sparse observational studies have reported relationships between leukocyte counts and some types of arrhythmias, the unique contribution of the present study is that we precisely investigated the association of each differential leukocyte count with five specific types of arrhythmias. In addition, we used MR methods, which help to minimize bias due to confounding factors and reverse causation, allowing us to draw conclusions about causal relationships, not merely associations. Diversity is an intrinsic characteristic of the immune system, and it exerts an important influence on an individual's risk of developing immune-mediated diseases. Although the abundance of circulating immune cells is particularly prone to change in the context of infection or injury, it has been demonstrated to be highly variable even among "healthy" individuals. Moreover, evidence has suggested that immune cell composition is associated with risks of cancer and cardiovascular disease among healthy people without prior corresponding diseases, although the exact causal relationship between changes in immune cell composition and disease remains unclear. The analyses in the present study were carried out on data from the Blood Cell Consortium, for which mean leukocyte counts were within the normal range.
Thus, our results may support the potential of leukocyte counts for assessing arrhythmia risk in disease-free individuals. High-degree atrioventricular block is the leading reason for pacemaker implantation. First-degree atrioventricular block, previously thought to be associated with a favorable prognosis, may actually be linked to adverse cardiovascular outcomes and increased mortality. However, because the mechanism of atrioventricular block is unknown, prevention and non-invasive treatment strategies are largely lacking in clinical practice. In particular, whether changes in circulating leukocyte components affect the risk of developing atrioventricular block remains unclear, as does the question of which leukocyte types exert greater influence on atrioventricular block. Macrophages have been implicated in the disorder: they are abundant at the atrioventricular node and affect its physiological function through electrical coupling with cardiomyocytes. However, the current study did not find evidence supporting a causal effect of circulating monocyte count on atrioventricular block. We assume that this discrepancy stems from the fact that most cardiac macrophages, especially those resident in the atrioventricular node, populate the heart during embryogenesis and self-maintain locally with minimal exchange with the circulating monocyte population. On the other hand, our results revealed that genetically determined high lymphocyte count increases the risk of atrioventricular block. To the best of our knowledge, data on the impact of lymphocytes on atrioventricular block are scarce. The etiology of atrioventricular block is related to fibrosis of the conduction system, electrical remodeling of atrioventricular node myocytes, and elevated vagal tone. Depending on the types of cells involved, it is speculated that lymphocytes may affect atrioventricular conduction in various ways.
For instance, by secreting cytokines, lymphocytes can regulate monocyte/macrophage recruitment and differentiation. As previously mentioned, macrophages can directly affect the action potential of cardiomyocytes through gap junctions. Additionally, lymphocytes can promote fibroblast activation by secreting inflammatory mediators, leading to fibrosis in the atrioventricular node area and subsequent electrical isolation. Moreover, it may be possible that, during cardiac injury, endogenous antigens in the conduction system are exposed, triggering the proliferation of autoreactive T and B cells and subsequent damage to atrioventricular node myocytes. Finally, it is worth investigating whether lymphocytes can directly couple to cardiomyocytes or produce autoantibodies that cross-react with ion channels in cardiomyocytes and ultimately affect their action potential. In conclusion, our results justify detailed studies into the role of lymphocytes in the pathogenesis of atrioventricular block, as well as their utility as a biomarker in disease risk assessment. Atrial fibrillation is the most common arrhythmia, and it increases the risk of stroke, heart failure, and mortality. Previous observational studies have reported links between the disorder and high ratios of circulating neutrophils to lymphocytes. Animal studies further support the notion that atrial fibrillation involves atrial infiltration by neutrophils. However, we did not find any significant association between genetically predicted neutrophil or lymphocyte counts and atrial fibrillation. In particular, although our effect estimates for neutrophil counts were directionally concordant with the results from observational studies, the effect sizes were small and the CIs wide. These findings, coupled with inconsistent estimates from our various sensitivity analyses, lead us to conclude that genetically determined neutrophil counts do not substantially influence the risk of atrial fibrillation.
One potential reason for the differences between our work and previous epidemiological studies is that our MR analysis evaluated how lifelong exposure to increased leukocyte counts affects the risk of atrial fibrillation. In contrast, observational studies typically have limited follow-up and may focus on short-term effects of leukocyte counts on the risk of postoperative atrial fibrillation. This study has limitations worth considering. First, it was restricted to a population of European descent for the sake of genetic homogeneity, so its generalizability to other ethnic groups is unclear. Second, lymphocytes are a diverse population of cells with distinct phenotypic and functional properties, and the aggregated count of all lymphocytes is far from fully representing the heterogeneous changes of lymphocyte subpopulations. Future studies should examine specific subsets of circulating lymphocytes, for example through fluorescence-activated cell sorting. Third, we were unable to distinguish different subtypes of each kind of arrhythmia in our analysis, owing to the lack of detailed original GWAS data. Fourth, no MR analysis can entirely exclude the influence of pleiotropic effects. Nevertheless, the observed consistency of effect estimates across multiple sensitivity analyses implies minimal confounding and bias. Fifth, this study did not encompass ventricular tachycardia or ventricular fibrillation, as large-scale population-based GWAS summary statistics on ventricular arrhythmias are currently unavailable; recruiting patients with ventricular fibrillation in the setting of acute myocardial infarction is challenging compared with the ease of recruiting atrial fibrillation patients.
Existing GWAS primarily focus on electrophysiological parameters that are highly correlated with ventricular tachyarrhythmias, such as the PR interval or QT interval, or on specific diseases like Brugada syndrome or long QT syndrome, which are predominantly characterized by ventricular arrhythmias. Sixth, the "all types of arrhythmias" outcome analyzed in the study represents a collection of phenotypes, which may introduce composition bias: the proportion of each arrhythmia in the dataset is unknown, and changes in these proportions can significantly affect the estimated causal effects, thereby reducing reproducibility. Furthermore, if the causal effects of leukocytes on different types of arrhythmias are opposite in direction, they may cancel each other out, resulting in inaccurate findings.

Conclusion

In conclusion, our study provides strong evidence of a causal effect of genetically high lymphocyte count on the risk of atrioventricular block. We failed to find evidence supporting a causal effect of lymphocyte or neutrophil count on atrial fibrillation. Our results provide insights into the role of systemic immune changes in the pathogenesis of arrhythmias.

The original contributions presented in the study are included in the article/ . Further inquiries can be directed to the corresponding authors. Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements. YuC, JY and SH contributed to conception and design of the study. YuC, LJ, XZ, YaC, LC, FZ, ZL and TF performed the statistical analysis. YuC and LL wrote the first draft of the manuscript. YuC, LL, XZ and YaC wrote sections of the manuscript. All authors contributed to the article and approved the submitted version.
Development of a deep learning system for predicting biochemical recurrence in prostate cancer | 2fb7f87d-39f0-496f-a182-1c338ee0ac8b | 11812243 | Surgical Procedures, Operative[mh] | According to the Global Cancer Statistics 2020, prostate cancer (PCa) is a common malignancy among men . A majority of these patients undergo radical prostatectomy, either as their initial treatment choice or after a period of active surveillance. Prostate-specific antigen (PSA) is a protein produced by both cancerous and noncancerous tissue in the prostate. Its concentration in the blood is used to judge the presence of PCa. In a successful prostatectomy surgery, PSA concentration will mostly be undetectable (< 0.1 ng/mL) after 2–6 weeks. However, in 20%–40% of these patients, PSA levels will rise again after surgery, indicating biochemical recurrence (BCR) and suggesting the regrowth of PCa cells . BCR is a strong risk factor for subsequent metastases and mortality. The accurate prediction of patients prone to experiencing BCR before prostatectomy is crucial for determining the pre-surgery course of action. For example, more aggressive treatment options, such as additional chemotherapy, radiotherapy, hormone therapy, immunotherapy and prophylactic extended pelvic lymphadenectomy, should be considered for patients with a high risk of BCR. Prostate multi-core needle biopsy is the most reliable diagnostic method for patients suspected of having PCa, and is also one of the standard procedures before prostatectomy . A systematic review showed that the cancer detection rate is associated with the number of cores . Histopathological grading of prostate biopsy, along with digital rectal examination and PSA level, forms the basis of most preoperative prediction systems used in clinics . 
These systems can effectively estimate PCa progression but have repeatedly been shown to have suboptimal prognostic and discriminatory performance, partly due to the subjective and non-specific nature of the core variables. For instance, Gleason grading was developed in the 1960s and has suboptimal interobserver reproducibility even among expert urologic pathologists. In recent years, the whole slide image (WSI), generated by digitizing a glass slide containing a histology sample, has seen a rise in popularity. The development of cancer always involves changes in cellular morphology and the microenvironment; therefore, there is a consensus among pathologists that WSIs contain abundant cancer-related information. Additionally, deep learning (DL) based on convolutional neural networks (CNNs) has demonstrated excellent capabilities for obtaining cancer-related information from WSIs. DL has been widely applied in prostate pathology, including cancer detection, Gleason grading, genomic signature prediction, and post-surgery BCR prediction. In this study, we developed a pre-surgery BCR prediction system based on DL and a multiple instance learning (MIL) framework, using prostate multi-core needle biopsies. The system demonstrated excellent performance on the testing dataset, providing evidence for the potential contribution of AI to medical diagnosis.

Data preparation

This study incorporated two independent cohorts of patients who underwent multi-core needle biopsy prior to prostatectomy for clinically localized PCa between January 1, 2018 and December 31, 2020. All resources were comprehensively characterized, including the patients' clinical and pathological data. None of the patients had received any preoperative treatment. All patients were investigated and followed up for a period ranging from 47 to 83 months, until November 30, 2024.
A total of 3092 hematoxylin and eosin (H&E) stained slides from 342 PCa patients, sourced from Tianjin Medical University Cancer Institute and Hospital (TMUCH) and Tianjin Baodi Hospital (TBH), were scanned at 20× magnification with a KFBIO PRO400 scanner (Ningbo Konfoong Bioinformation Company, Zhejiang, China; 0.25 μm/pixel at 40× magnification and 0.50 μm/pixel at 20× magnification). All patients underwent 10–16-core needle biopsy according to the hospital guidelines. Pathologists, blinded to the diagnosis, reviewed the slide images and selected 5 WSIs for each patient. The selection criteria required that all 5 WSIs for each patient contain PCa tissue, with WSIs carrying higher Gleason scores selected preferentially. In total, 1585 WSIs from 317 patients were selected for model training and testing. The digital pathology slides and clinical information of 254 patients treated at TMUCH formed the training cohort, and those of 63 patients treated at TBH formed the testing cohort. This study was approved by the Ethics Committee of TMUCH (No. Ek2020074) and was conducted in accordance with the 1964 Helsinki Declaration and its subsequent amendments, or comparable ethical standards. Our Ethics Committee granted a waiver of informed consent. The clinical information of the patients, including age, PSA value, and primary and secondary Gleason scores, was collected from electronic surgical pathology reports. BCR was defined as two consecutive postoperative PSA values equal to or higher than 0.2 ng/mL; the remaining patients were confirmed as non-BCR through telephone follow-up. The baseline data of the patients before radical prostatectomy are shown in Table .

Overview of the BCR prediction system

The workflow of the BCR prediction system is shown in Fig. . The system is implemented in three stages, each mapping to BCR prediction at the patch level, WSI level, and patient level, respectively.
In stage 1, patches cropped from the WSIs are fed into a pre-trained CNN model to predict the recurrence probability; this stage therefore outputs the patch-level prediction. In stage 2, a Patch Likelihood Histogram (PALHI) pipeline and a Bag of Words (BoW) pipeline are used to extract WSI-level features from the prediction probabilities of all patches in each WSI. A model based on the PALHI and BoW features can then predict the recurrence probability of each WSI, so this stage outputs the WSI-level prediction. In stage 3, because each patient has 5 WSIs, patient-level features are generated by aggregating each patient's WSI-level features with a pooling operation. These features are then combined with clinical characteristics and input into machine learning (ML) classifiers to predict patient-level recurrence risk.

Data processing

To address the challenge of handling large-scale digital images, we implemented a systematic pre-processing strategy. First, the WSIs were divided into 512 × 512 patches using a non-overlapping partitioning approach, strictly maintaining a resolution of 0.5 μm/pixel. Then, to ensure high-quality data, we removed background patches according to pixel and brightness thresholds. In addition, we used the Vahadane method to normalize the color of the patches. During training we applied data augmentations, including random horizontal and vertical flipping; during testing we applied only color normalization.

Deep learning training

The proposed DL process comprises three levels of prediction: patch-level, WSI-level and patient-level, using a CNN and MIL. For the patch-level predictions, we implemented transfer learning to enhance the model's generalization across heterogeneous cohorts, initializing the model's parameters with pretrained weights from the ImageNet dataset.
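The pre-processing step described above (non-overlapping 512 × 512 tiling with background removal) can be sketched as follows. The brightness threshold here is an illustrative assumption, since the exact pixel and brightness thresholds are not reported in the paper:

```python
import numpy as np

def tile_and_filter(wsi, patch_size=512, bright_thresh=220):
    """Split a WSI array of shape (H, W, 3) into non-overlapping patches
    and drop near-white background patches whose mean intensity exceeds
    the brightness threshold (a placeholder value)."""
    h, w, _ = wsi.shape
    patches = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patch = wsi[y:y + patch_size, x:x + patch_size]
            if patch.mean() < bright_thresh:  # keep tissue, drop background
                patches.append(patch)
    return patches

# Toy example: a 1024 x 1024 "slide" whose left half is tissue-like (dark)
# and whose right half is background-like (near white).
wsi = np.full((1024, 1024, 3), 250, dtype=np.uint8)
wsi[:, :512] = 120
kept = tile_and_filter(wsi)
```

In practice, stain normalization (e.g. the Vahadane method) and the random flips would be applied to the kept patches before training.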
Afterward, we fine-tuned the entire model on the training cohort dataset (254 samples), which was annotated only at the patient level, since the WSI regions related to BCR in PCa cannot be delineated. Using transfer learning, we evaluated the efficacy of several DL architectures, including Inception_v3, ResNet50, VGG19 and ResNet18, as shown in Table . Inception_v3 achieved the best area under the receiver operating characteristic curve (AUC) and accuracy, and was therefore selected for this study. To enhance generalization, we used a cosine decay schedule for the learning rate:

$$\eta_t^{\text{task-spec}} = \eta_{\min}^{i} + \frac{1}{2}\left(\eta_{\max}^{i} - \eta_{\min}^{i}\right)\left(1 + \cos\left(\frac{T_{cur}}{T_i}\pi\right)\right)$$

where $\eta_{\max}^{i} = 0.01$, $\eta_{\min}^{i} = 0$, and $T_i = 50$ denote the maximum learning rate, the minimum learning rate, and the number of training epochs, respectively. For transfer learning, fine-tuning matters because the backbone already carries pre-trained parameters; we therefore began fine-tuning the backbone once $T_{cur}$ exceeded $\frac{1}{2}T_i$, with its learning rate defined as:

$$\eta_t^{\text{backbone}} = \begin{cases} 0 & \text{if } T_{cur} \le \frac{1}{2}T_i \\ \eta_{\min}^{i} + \frac{1}{2}\left(\eta_{\max}^{i} - \eta_{\min}^{i}\right)\left(1 + \cos\left(\frac{T_{cur}}{T_i}\pi\right)\right) & \text{if } T_{cur} > \frac{1}{2}T_i \end{cases}$$

The other hyperparameters were as follows: optimizer, SGD; loss function, softmax cross-entropy; batch size, 32.
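The two learning-rate schedules can be written directly from the formulas above. This is a minimal sketch in which the epoch counter T_cur is assumed to be an integer epoch index:

```python
import math

ETA_MAX, ETA_MIN, T_I = 0.01, 0.0, 50  # values reported in the paper

def lr_task_specific(t_cur):
    """Cosine decay from ETA_MAX at epoch 0 down to ETA_MIN at epoch T_I."""
    return ETA_MIN + 0.5 * (ETA_MAX - ETA_MIN) * (1 + math.cos(math.pi * t_cur / T_I))

def lr_backbone(t_cur):
    """The backbone stays frozen (lr = 0) for the first half of training,
    then follows the same cosine schedule as the task-specific head."""
    return 0.0 if t_cur <= T_I / 2 else lr_task_specific(t_cur)
```

In a training loop, both rates would be recomputed once per epoch and assigned to the corresponding SGD parameter groups.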
Multi-instance learning process

After DL training, we used the trained Inception_v3 model to predict labels for all patches and obtained a probability for each patch. We then used the MIL method to aggregate the probabilities of all patches in each WSI into a feature vector for that WSI. The MIL method used in this study was refined from methods commonly used in pathological image analysis, and is composed of the PALHI pipeline and the BoW pipeline. The PALHI pipeline uses a histogram to represent the distribution of patch probabilities in a WSI; the counts of patch probabilities falling into the histogram intervals constitute the feature vector of the corresponding WSI. The BoW pipeline maps each patch to a TF-IDF variable, and these variables constitute a TF-IDF feature vector representing the corresponding WSI. Term frequency-inverse document frequency (TF-IDF) is a statistical method commonly used in information retrieval that combines term frequency (TF) and inverse document frequency (IDF). TF measures the frequency of each patch feature within a single WSI, and IDF weights each feature by its rarity across all WSIs. By multiplying the TF and IDF values, TF-IDF assigns a higher weight to patch features that are frequent within a particular WSI (high TF) yet relatively rare across the whole set of WSIs (high IDF), thereby quantifying the importance of features by their frequency within and across WSIs. The feature vectors from the PALHI and BoW pipelines were then concatenated and used to train classifiers that predict the label of each WSI.
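A simplified sketch of the two aggregation pipelines follows. The bin count and the smoothed TF-IDF weighting are illustrative assumptions (the paper does not report them); each probability bin is treated as a "word" for the BoW pipeline:

```python
import numpy as np

def palhi_features(patch_probs, n_bins=10):
    """PALHI: histogram of a WSI's patch probabilities over [0, 1],
    normalized so the feature vector is independent of patch count."""
    hist, _ = np.histogram(patch_probs, bins=n_bins, range=(0.0, 1.0))
    return hist / max(len(patch_probs), 1)

def bow_tfidf_features(all_wsi_probs, n_bins=10):
    """BoW: treat each probability bin as a 'word'; weight each WSI's
    bin frequencies (TF) by the bin's rarity across all WSIs (IDF)."""
    counts = np.stack([
        np.histogram(p, bins=n_bins, range=(0.0, 1.0))[0] for p in all_wsi_probs
    ]).astype(float)
    tf = counts / counts.sum(axis=1, keepdims=True)
    df = (counts > 0).sum(axis=0)                      # WSIs containing each "word"
    idf = np.log((1 + len(all_wsi_probs)) / (1 + df)) + 1.0   # smoothed IDF
    return tf * idf

# Two toy WSIs: one dominated by high patch probabilities, one by low.
wsi_a = np.array([0.9, 0.85, 0.95, 0.1])
wsi_b = np.array([0.05, 0.1, 0.15, 0.2])
palhi = palhi_features(wsi_a)
tfidf = bow_tfidf_features([wsi_a, wsi_b])
```

Concatenating `palhi` with the matching row of `tfidf` would give the combined WSI-level feature vector described in the text.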
Through the PALHI and BoW pipelines, we integrated the initially scattered patch-level predictions into WSI-level features, which were used for subsequent analyses, including t-distributed stochastic neighbor embedding (t-SNE) feature projection and training ML classifiers.

Signature building

After constructing WSI-level features from the patch-level predictions, probability histograms, and TF-IDF features in combination, we aggregated them into patient-level features through a pooling operation. The final patient representations were then formed by integrating the patient-level pathology features with clinical features. For feature selection, all features were first standardized with the z-score method, which transforms the data to have a mean of 0 and a standard deviation of 1:

$$z = \frac{x - \mu}{\sigma}$$

where $x$ is the original data point, $\mu$ is the mean of the dataset, and $\sigma$ is its standard deviation. Next, Pearson's correlation coefficient, which quantifies the strength of the linear relationship between two features, was used to compute pairwise correlations; whenever the correlation coefficient between two features exceeded 0.9, only one of the two was retained. Finally, least absolute shrinkage and selection operator (LASSO) regression was used for feature selection, since it shrinks the coefficients of unimportant features to zero, thereby performing variable selection and reducing the risk of model overfitting.
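The three-step selection procedure can be sketched with scikit-learn on synthetic data. The LASSO penalty used here is an illustrative value, not the study's, and the synthetic features stand in for the pathology/clinical signature:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
X[:, 5] = X[:, 0] + 1e-3 * rng.normal(size=100)   # feature 5 nearly duplicates feature 0
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.normal(size=100)

# Step 1: z-score standardization.
Xz = StandardScaler().fit_transform(X)

# Step 2: drop one of each pair of features with |Pearson r| > 0.9.
corr = np.corrcoef(Xz, rowvar=False)
keep = []
for j in range(Xz.shape[1]):
    if all(abs(corr[j, k]) <= 0.9 for k in keep):
        keep.append(j)
Xf = Xz[:, keep]

# Step 3: LASSO shrinks uninformative coefficients to exactly zero.
lasso = Lasso(alpha=0.1).fit(Xf, y)
selected = [keep[j] for j, c in enumerate(lasso.coef_) if c != 0.0]
```

The surviving indices in `selected` identify the features that enter the final patient-level signature.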
Model evaluation

After signature building, we used several machine learning algorithms, including a multilayer perceptron (MLP), logistic regression (LR), a support vector machine (SVM) and Random Forest, to develop classifiers. The MLP was a fully connected 3-layer perceptron with 128, 64, and 32 hidden nodes, respectively. The SVM used a radial basis function kernel, with the other parameters kept at their defaults. The Random Forest set n_estimators to 10. All of these models used the scikit-learn implementation; scikit-learn is a popular Python library that integrates various ML algorithms, allowing users to call them directly for different ML tasks. The receiver operating characteristic (ROC) curve was used to validate the performance of the Inception_v3 model in region identification at the patch level. Probability heatmaps were used for WSI-level visual evaluation after the MIL process. For the BCR prediction model, we used the AUC as the primary performance metric, along with accuracy, sensitivity and specificity. Its clinical practicability was evaluated using decision curve analysis (DCA).
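The classifier setup described above (MLP with 128/64/32 hidden units, RBF-kernel SVM, Random Forest with n_estimators=10, evaluated by AUC) maps directly onto scikit-learn. The synthetic features and the simple train/test split below are placeholders for the study's patient-level data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for patient-level features (pathology + clinical).
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 12))
y = (X[:, 0] + X[:, 1] + 0.5 * rng.normal(size=200) > 0).astype(int)

classifiers = {
    "MLP": MLPClassifier(hidden_layer_sizes=(128, 64, 32), max_iter=1000, random_state=0),
    "LR": LogisticRegression(),
    "SVM": SVC(kernel="rbf", probability=True, random_state=0),
    "RandomForest": RandomForestClassifier(n_estimators=10, random_state=0),
}

aucs = {}
for name, clf in classifiers.items():
    clf.fit(X[:150], y[:150])                    # simple train split
    probs = clf.predict_proba(X[150:])[:, 1]     # held-out probabilities
    aucs[name] = roc_auc_score(y[150:], probs)
```

`probability=True` on the SVC enables `predict_proba`, which the ROC/AUC evaluation requires.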
Performance evaluation and visualization

The performance of the BCR prediction system was evaluated at the patch, WSI, and patient levels using ROC curves. At the patch level, the AUC of the Inception_v3 architecture was 0.968 on the training dataset and 0.803 on the testing dataset, as shown in Fig. A. At the WSI level, all ML classifiers improved on the patch-level performance on the testing dataset, as shown in the ROC curves, with the RandomForest classifier achieving the highest AUC, 0.848, as shown in Fig. B.
At the patient level, classifier performance improved further after feature aggregation by either average or maximum pooling, with average pooling performing better and yielding AUC values of 0.908 with the MLP and LR classifiers, as shown in Fig. C and D. These results demonstrate the effectiveness of our feature aggregation approach. Probability maps were used to assess the patch-level prediction outcomes, as shown in Fig. A. Compared to non-BCR patients, the WSIs of BCR cases exhibited a greater number of patches with probability values approaching 1; the MIL model combining the BoW and PALHI methods was therefore used in this study. Another reason for choosing the MIL method is that it does not require manual pixel-level annotation, which is crucial for building a BCR prediction model, because even experienced pathologists cannot accurately match the pathological morphology of H&E slides with BCR. Gradient-weighted Class Activation Mapping (Grad-CAM) is a technique that visualizes class localization by propagating gradients into the final convolutional layer of a neural network. Figure B shows Grad-CAM activations of the last convolutional layer for BCR prediction; this visualization highlights the regions of the input image that contributed most to the prediction, offering insight into the model's decision-making. To understand the performance gain at the patient level, we applied the t-SNE algorithm. WSI-level features were extracted from the patch prediction probabilities by the PALHI and BoW pipelines, and their t-SNE projections are shown in Fig. A. The inter-class and intra-class distances were calculated to quantitatively describe the change in features from the WSI level to the patient level, as shown in Fig. B.
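The inter- and intra-class distances can be computed as centroid and mean pairwise Euclidean distances; the paper does not define the exact metrics, so these definitions are an assumption. The sketch below uses synthetic 2-D features in which each patient's WSI vectors are symmetric around the patient mean, so average pooling recovers the mean exactly and visibly tightens each class:

```python
import numpy as np

def intra_class_distance(feats):
    """Mean pairwise Euclidean distance within one class."""
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    n = len(feats)
    return d.sum() / (n * (n - 1))

def inter_class_distance(a, b):
    """Euclidean distance between the two class centroids."""
    return float(np.linalg.norm(a.mean(axis=0) - b.mean(axis=0)))

# Toy WSI-level features: 4 patients per class, 5 WSIs per patient.
offsets = np.array([[3., 0.], [-3., 0.], [0., 3.], [0., -3.], [0., 0.]])
means = {
    "BCR":    np.array([[3., 0.], [3.5, .5], [3., 1.], [3.5, -.5]]),
    "nonBCR": np.array([[-3., 0.], [-3.5, .5], [-3., 1.], [-3.5, -.5]]),
}
wsi_feats = {c: m[:, None, :] + offsets[None, :, :] for c, m in means.items()}
patient_feats = {c: f.mean(axis=1) for c, f in wsi_feats.items()}  # average pooling

intra_wsi = intra_class_distance(wsi_feats["BCR"].reshape(-1, 2))
intra_patient = intra_class_distance(patient_feats["BCR"])
inter_patient = inter_class_distance(patient_feats["BCR"], patient_feats["nonBCR"])
```

Maximum pooling would replace `f.mean(axis=1)` with `f.max(axis=1)`; comparing the resulting inter-class distances mirrors the comparison made in the text.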
The t-SNE projections of patient-level features aggregated by maximum pooling and by average pooling are shown in Fig. C and Fig. D, respectively. Clear separation between BCR and non-BCR cases was observed in both the WSI-level and patient-level t-SNE projections. The intra-class distances of both the BCR and non-BCR cases decreased markedly after pooling (from 25.94 and 32.84 to 9.47 and 13.26), which correlated with the higher AUC values at the patient level. In addition, the inter-class distance of the average pooling projection was larger than that of maximum pooling (35.64 vs. 34.23), which correlated with the higher AUC of average pooling. The average pooling operation was therefore adopted in our BCR prediction system.

The impact of WSI quantity on prediction performance

The impact of the number of WSIs per patient on model efficacy was evaluated and is shown in Fig. E. For models using the MLP, LR, and SVM classifiers, AUC values increased with the number of WSIs per patient, and for all classifiers the maximum AUC was achieved when all WSIs of each patient were used in training. To explain this further, Table lists the AUC, accuracy, sensitivity, and specificity of these models on the testing cohort. When a single WSI per patient was used for training, the MIL model exhibited higher accuracy and specificity; when multiple WSIs per patient were used, it exhibited higher AUC and sensitivity. For each patient, several WSIs appeared to contain more of the crucial information and were assigned higher weights in the final decision. Using all WSIs per patient for training gave the model the highest AUC together with moderate accuracy, sensitivity, and specificity.
We therefore concluded that increasing the number of WSIs per patient improves the generalization of the MIL model, and features aggregated from all 5 WSIs of each patient were used for model training.

The impact of clinical features on prediction performance

The ROC curves of the classifiers trained on both pathology images and clinical features, evaluated on the testing cohort, are shown in Fig. A. The clinical information comprised patient age, PSA value, and primary and secondary Gleason scores. The ROC curves of classifiers trained on clinical information alone are shown in Fig. B. Compared with clinical features alone, the pathological image features extracted by the CNN and MIL methods significantly enhanced model efficacy. The MLP classifier trained on the combined pathological and clinical features achieved the highest AUC in this study, 0.911 (95% CI: 0.840–0.982). The corresponding accuracy, sensitivity, specificity and F1-scores of the classifiers are shown in Table . For all classifiers trained on pathological and clinical features, decision curve analyses demonstrated good clinical benefit, as shown in Fig. C-F.
PCa is the leading cause of cancer-associated disability, owing to the negative effects of both over-treatment and under-treatment, and it is also a major cause of cancer death in men. Radical prostatectomy is pivotal in the treatment of PCa, and its procedure directly affects patient prognosis. BCR of PCa indicates tumor progression and serves as a crucial basis for formulating treatment procedures. This study developed a preoperative BCR prediction system for PCa, aiming to provide a valuable reference for guiding radical prostatectomy procedures.
Since it is difficult to annotate BCR-related regions or patches on pathological slides, we assigned each WSI a single overarching label instead of manually annotating each region or patch of a slide, meaning the cropped patches of each WSI shared the label of the corresponding WSI. The DL models then use this WSI-level annotation to identify regions of interest or to classify the disease state of the slide. This approach, combining DL and MIL pipelines, has demonstrated promising performance in tumor region identification , Gleason scoring , tumor purity prediction , and morphological feature segmentation of PCa. Thus, we constructed a BCR prediction system for PCa biopsy tissue based on DL and MIL models, and demonstrated enhanced performance compared to current systems.

Several BCR prediction systems for PCa have been developed, and most of them are trained on clinical variables, radiological variables , or macroscopic histological variables such as Gleason score, quantitative nuclear grade, and seminal vesicle invasion. With the development of AI, DL models that capture microscopic information and extract high-dimensional features have shown excellent performance in many medical tasks. Eminaga et al. and Pinckaers et al. trained DL models on H&E-stained tissue microarrays (TMA) to predict BCR. Eminaga et al. constructed their DL model using the PlexusNET and grid-search methods, and the AUC on the testing cohort was 0.71 (95% CI: 0.67–0.75). Pinckaers et al. constructed a DL model based on the ResNet50 backbone, and the hazard ratios in univariate and multivariate analyses were 5.78 (95% CI: 2.44–13.72; p < 0.005) and 3.02 (95% CI: 1.10–8.29; p = 0.03), respectively. Huang et al. trained a DL model using WSIs of radical prostatectomy specimens to predict BCR, and the AUC on the testing cohort was 0.78.
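The weak-labeling scheme described above, where every cropped patch inherits its slide's BCR label, can be sketched as follows (a minimal illustration with names of our own choosing):

```python
def make_weak_labels(wsi_records):
    """Propagate each WSI's slide-level label to all of its cropped
    patches, as in the weakly-supervised setup described above.
    wsi_records: list of (patch_ids, wsi_label) pairs."""
    patches, labels = [], []
    for patch_ids, wsi_label in wsi_records:
        patches.extend(patch_ids)
        labels.extend([wsi_label] * len(patch_ids))
    return patches, labels

patches, labels = make_weak_labels([
    (["s1_p0", "s1_p1"], 1),   # BCR slide: all patches labeled 1
    (["s2_p0"], 0),            # non-BCR slide
])
```

The trade-off, as the text notes, is that some patches of a BCR slide may not themselves contain recurrence-related morphology; the MIL framework is what lets the model down-weight such instances.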
Our system trained a DL model based on the InceptionV3 backbone using WSIs of preoperative biopsy tissue, and its WSI-level AUC in the testing cohort reached 0.848 (95% CI: 0.802–0.894).

MIL can improve the performance of DL models . It is a machine learning paradigm in which training data are grouped into bags, and the label of a bag is determined by the labels of its instances. Although MIL is widely adopted in computer vision, its use on prostate histology images remains scarce. Most weakly-supervised methods for histological analysis pool the features from WSI patches under the MIL framework . In this study, we combined MIL with histogram methods to extract the prediction-probability features of all patches in each WSI as the WSI-level features; we then aggregated the WSI-level features into patient-level features using pooling operations. Each step improved the performance of the system. Through t-SNE dimensionality reduction, we found that the AUC after average pooling exceeded that after maximum pooling because average pooling generated larger inter-class distances. This may be because average pooling better represents the overall BCR trend in each patient, hence its superior performance.

The pathology diagnosis of PCa needle biopsy is the gold standard for confirming PCa before radical prostatectomy. Increasing the core number of prostate biopsies can enhance the cancer detection rate, and related studies have suggested an optimal number of 10–12 cores . In this study, experienced pathologists excluded WSIs without cancerous regions and then selected five WSIs for each patient. We examined the impact of the number of WSIs on BCR prediction and observed that increasing the number of WSIs improved the overall performance of the model, a trend also reported in other studies .
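The histogram step described above, summarizing all patch-level prediction probabilities of a slide into one fixed-length WSI feature, can be sketched as follows; the bin count is illustrative, not taken from the paper:

```python
import numpy as np

def wsi_histogram_feature(patch_probs, n_bins=10):
    """Summarize the patch-level BCR probabilities of one WSI as a
    normalized histogram over [0, 1], used as the WSI-level feature
    vector (n_bins is an assumption for illustration)."""
    patch_probs = np.asarray(patch_probs, dtype=float)
    hist, _ = np.histogram(patch_probs, bins=n_bins, range=(0.0, 1.0))
    return hist / max(len(patch_probs), 1)

feat = wsi_histogram_feature([0.05, 0.12, 0.55, 0.95], n_bins=10)
```

Because the histogram is normalized by the number of patches, slides cropped into different numbers of patches still yield comparable feature vectors.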
Further analysis revealed that this was because the features of a few WSIs for each patient received higher weights in the model, suggesting that these WSIs contained information more closely related to the possibility of recurrence. In our system, the influence of these high-weight WSIs was not diluted as the number of WSIs increased. Therefore, we believe that increasing the number of WSIs per patient can improve the generalization performance of BCR prediction models for PCa.

We visualized high-risk BCR areas in prostate biopsy tissue slides and found that DL features were significantly correlated with pathological findings, indicating the interpretability of WSI-based DL models. Using the Grad-CAM-based images, we observed that within tumor regions, cribriform (sieve-like) arrangement areas and areas of suspected vascular-space invasion indicated a higher likelihood of BCR, consistent with the current consensus . Areas of neural invasion, necrosis, and sharp-angled glands also indicated a higher likelihood of BCR. In the non-tumor areas, gland morphology in non-cancer regions differed between patients who experienced recurrence and those who did not. Recent studies based on prostate digital pathology have reported similar findings .

It should be clarified that the proposed system does not use tumor-stage (T-stage) information as a system parameter, even though it had been collected beforehand, for the following reasons. First, we employed clinical indices obtainable before radical prostatectomy as system parameters, so that the proposed system could predict BCR risk immediately after obtaining either preoperative biopsy WSIs or postoperative WSIs. In many cases, however, the pathology tumor stage (pT-stage) is only reported after radical prostatectomy has been completed.
Secondly, we preferred clinical variables that can be obtained or generated automatically as system inputs, such as the Gleason score, which can be provided by the PAIGE Prostate AI product after scanning the H&E-stained slides. T-stage information, by contrast, still relies on the surgeon's input, which restricts the system's independent diagnostic capability. Furthermore, we also attempted to construct a system with T-stage information, but it failed to improve the performance (Supplementary Table 1). Initially, the clinical tumor stage (cT-stage) was collected, but the preponderance of cases presented with a T2 stage, rendering it statistically uninformative. Subsequently, the more accurate pT-stage was used, but it was excluded during the feature selection process, which was carried out using Pearson's correlation coefficient and LASSO (Supplementary Fig. 1). Therefore, after weighing the potential gain in system efficacy from T-stage (cT and pT) information against the costs the system would have to bear, we did not include the T-stage as a system input parameter.

This research has limitations. All patients selected for this research had at least five biopsy cores containing PCa tissue, which introduced selection bias in patient inclusion. In addition, this research used a retrospective cohort; a prospective design would strengthen the findings of the study.

In summary, this study developed a DL system using digital pathology slides from prostate multi-core needle biopsies to predict PCa recurrence before radical prostatectomy. Because obtaining cancer tissue in biopsies is uncertain, the proposed system can accept any number of WSIs as input; note, however, that the system's performance tends to decline as the number of input WSIs decreases.
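The first stage of the feature selection mentioned above, filtering by Pearson's correlation coefficient, can be sketched as follows; the threshold value is illustrative and the subsequent LASSO stage is omitted:

```python
import numpy as np

def pearson_filter(X, y, threshold=0.5):
    """Keep only feature columns whose |Pearson r| with the label
    exceeds a threshold (illustrative value; LASSO stage omitted)."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    keep = np.abs(r) > threshold
    return X[:, keep], keep

y = np.array([0, 1, 0, 1])
X = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 1, 1],
              [1, 0, 1]])  # col 0 tracks y, col 1 anti-tracks y, col 2 is uncorrelated
X_sel, keep = pearson_filter(X, y)
```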
The predictive result can provide a valuable reference for guiding radical prostatectomy procedures, such as considering more aggressive treatment approaches or hormone and immune therapy for high-risk patients. The system has demonstrated satisfactory performance in the testing cohort and the potential to produce favorable clinical benefits. Supplementary Material 1.
Optical Imaging-Based Guidance of Viral Microinjections and Insertion of a Laminar Electrophysiology Probe Into a Predetermined Barrel in Mouse Area S1BF | ab8fbba0-e020-4346-ac07-3d2b04eace76 | 8158817 | Physiology[mh] | Optogenetics activates light-sensitive ion channels – or pumps – termed opsins at a physiologically relevant, millisecond-scale on/off kinetics . Depending on whether they allow cations to cross down their gradient or they pump anions or protons across the cell membrane, opsins can activate or inactivate , respectively, specific populations of neurons. For a wide range of experimental objectives and hypothesis, optogenetics can be combined with readout techniques to measure in vivo neural activity, such as extracellular electrophysiology recordings , functional imaging techniques such as functional magnetic resonance imaging (fMRI) or intrinsic optical imaging , and behavioral observations . To study cortical processing at the scale of cortical columns, it is important to optimize the opsin gene introduction into neurons around a predetermined, small cortical module and the readout from such a module. Selecting the cortical sites for microinjections and for inserting the recording electrodes is commonly done by using stereotaxic coordinates referenced from structural brain atlases . This approach has been commonly used and optimized for applying optogenetics in rodents . However, atlas-based positioning of microinjections and electrodes provides only an approximation of the true locations of functional modules, as there can be significant inter-individual (between-subject) variation . This is especially problematic for small functional modules such as cortical columns with diameters as small as 200 – 300 microns. For example, maps of cortical columns for the same functional feature from the same cortical area in two different individuals may feature two different organizations: a radial pinwheel organization or a linear organization . 
Therefore, localizing an insertion with high precision with respect to cortical columns cannot be based solely on stereotactic coordinates. A different method used for guiding the insertion of an electrode prior to recording from a functional module is based on multiple insertions of an electrode to sparsely sample the responses from the region of interest. Previous studies located barrels in the rodent primary somatosensory barrel field (S1BF) by systematically inserting electrodes in a trial-and-error approach while administering a stimulus to characterize the cortical column properties and performing post hoc histology to validate the insertion site . However, this method takes a long time to perform, it can damage the cortex before the experiment has even begun, and it gives only a partial view and sparse sampling of a small area of barrels and septa. The purpose of the method we present here is to enable high-precision targeting of injections and neurophysiological recordings relative to small functional modules in rodents. To this end, we have devised a protocol to guide microinjections and electrode insertions more efficiently and more precisely than the methods described above. Based on stimulus-evoked hemodynamic responses imaged with Optical Imaging of Intrinsic Signals (OI-IS) , the protocol allows guiding the insertions of microinjection pipettes and/or recording electrodes around or into small functional modules. Optical Imaging of Intrinsic Signal primarily measures the local changes in the content of deoxy-hemoglobin (deoxy-Hb), oxy-Hb, and the total volume of Hb elicited by neural activation . These changes cause changes in the absorption of light of specific wavelengths shone onto the surface of the cortex. As we will demonstrate, the results can be used for several steps in an optogenetics experiment. 
They can be used for guiding viral microinjections around a small target area, as previously demonstrated in monkey area V1 , optimizing the photostimulation used for optogenetics as described by , and guiding an electrophysiology electrode to a functional module as small as a single barrel with a diameter of 200 microns. Regardless of the brain region under investigation, the principle is the same: apply stimuli known to activate the functional module and obtain a spatially mapped stimulus-activated hemodynamic response. The response amplitude needs to be sufficient to create visible spatial contrast between modules that respond preferentially to the specific stimulus and other modules in the area. Throughout the text, we will use the terms 'targeted module,' 'pre-defined module,' or 'targeted barrel' to refer to the small stimulus-activated region around which we aim to perform microinjections or into which we guide the electrode insertion. Optical Imaging of Intrinsic Signals resolves fine-scale modules showing hemodynamic responses that correlate with neuronal responses . The imaging can be performed with a low degree of invasiveness, through the intact (in mice) or thinned (in mice and rats) skull, which is optimal for survival experiments. OI-IS-based guidance of electrode insertions to small functional modules was previously introduced in large animals . In rats, OI-IS can localize individual cortical columns and barrels in area S1 . OI-IS has recently gained ground as a means of localizing cortical targets for optogenetics manipulation and investigation . This targeting functionality of OI-IS resembles previous studies that localized cortical functional columns in non-human primates for the purpose of recording from them . However, only one article has described OI-IS explicitly tailored and designed to guide optogenetics viral microinjections .
Our paper presents detailed methods for OI-IS-based guidance of optogenetics viral microinjections close to, and around, a predetermined small functional module, and extends the OI-IS-based guidance to the readout/recording from within such a module. Our current study focuses on the guidance of microinjections of viral vectors around a small functional module in the rodent cortex and, following an incubation period, the guidance of an electrode insertion into a pre-defined module for electrophysiology recordings. These methods also allow the user to visualize the spatial spread of the optogenetics photostimulation and, at higher magnification, to estimate the cortical depth of the electrode contacts by imaging the upper recording contacts visible outside of the cortex . We verify that the method indeed results in the insertion of the electrode into the targeted module by visualizing the insertion site in images of the histology-processed tissue. Overall, the methods we describe allow for precise and consistent functional localization of small cortical structures with the minimal degree of invasiveness required for optogenetics experiments.
(1) Pre-surgery Preparation

All procedures were approved by the animal care committees of the Montreal Neurological Institute and McGill University and were carried out in accordance with the guidelines of the Canadian Council on Animal Care. Adult female and male C57BL/6 mice, 10–15 weeks old, were used for all experiments. The choice of mice, including their genotype and phenotype, must be made judiciously according to the specific experimental needs. A list of equipment items and materials commonly used in the experiments we describe is provided in . Before experiments, sterilize surgical instruments using a hot bead sterilizer (Germinator 500, Stoelting, IL, United States) or by autoclaving. Apply aseptic protocols to the surgery and recovery areas.

(1.1) Induce and then systematically maintain an appropriate plane of anesthesia and analgesia for the surgical procedure. We use a 'Mouse Cocktail' combination of ketamine 80–100 mg/kg, xylazine 10 mg/kg, and acepromazine 2.5–3 mg/kg, injected I.P., to induce a surgical plane of anesthesia, followed by ketamine 80–100 mg/kg and xylazine 10 mg/kg to maintain anesthesia . For analgesia, we inject an initial one-time bolus of carprofen 5–10 mg/kg subcutaneously . To verify the surgical level of anesthesia, check for the absence of whisking and of the withdrawal reflex during a painful hindpaw pinch, and for the absence of blinking upon eye contact (while constantly hydrating the cornea with a protective ophthalmic ointment). In addition, monitor the heartbeat, and make sure the respiration is regular with no signs of gasping . The anesthetics used should leave neurophysiological activity and neurovascular coupling as unchanged as possible. To this end, a light plane of anesthesia must be kept constant during the recording sessions by systematically monitoring the vital signs and reflexes, as well as the electrophysiology readout .
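For reference, the weight-based doses in step 1.1 translate into injection volumes with simple arithmetic; a sketch, where the stock concentration is a hypothetical placeholder (always use the concentration printed on your own vial):

```python
def injection_volume_ul(body_mass_g, dose_mg_per_kg, stock_mg_per_ml):
    """Volume (microliters) of anesthetic stock needed for a
    weight-based dose. Stock concentration is a placeholder value."""
    dose_mg = dose_mg_per_kg * body_mass_g / 1000.0   # g -> kg
    return dose_mg / stock_mg_per_ml * 1000.0         # mL -> microliters

# e.g., ketamine at 100 mg/kg for a 25 g mouse, assuming a 100 mg/mL stock:
vol = injection_volume_ul(25, 100, 100)  # 25 microliters
```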
Any systematic increase in heart rate or respiration rate must be counteracted by additional low doses of injectable anesthetic. Conversely, if the vital signs decrease and the spontaneous electrophysiological activity is visibly poor, provide the appropriate antagonist .

(1.2) If using piezoelectric whisker stimulation, tape or cut away all the same-side whiskers that will not be stimulated during the experiment. Use the surgical microscope to identify these whiskers and ensure they are cut.

(1.3) Position the animal in a small-animal stereotaxic frame (David Kopf Instruments, CA, United States) in a manner consistent with the conventions of the reference atlas, and provide free-flowing oxygen via a nose cone . During electrophysiology recordings, switch to a mixture of 70% medical air and 30% oxygen. To reduce discomfort, use non-penetrating ear bars covered with a drop of Xylocaine ointment (Aspen Pharmacare Canada Inc., ON, Canada).

(2) Stereotaxic Surgery and Skull Thinning or Craniotomy

(2.1) Cut the skin longitudinally along the midline with a scalpel and retract it laterally with a clamp. Remove soft tissue and dry the exposed skull surface using cotton swabs. Administer topical epinephrine 1 mg/mL (Epiclor, McCarthy & Sons Service, AB, Canada) sparingly, or sterile isotonic 0.9% NaCl saline, in case of muscle or bone bleeding.

(2.2) Flush the surgical site with small amounts of topical lidocaine hydrochloride 2% (Wyeth, NJ, United States). As soon as the bone has been pierced, do not use lidocaine or epinephrine, as they will modify the animal's physiology and brain state. Instead, use sterile isotonic 0.9% NaCl saline (Baxter Healthcare Corporation, IL, United States) or, preferably, Hanks' Balanced Salt Solution (HBSS) (MilliporeSigma Canada Co., ON, Canada) to thoroughly clean the surgical site.
Because bleeding can impact the quality of the OI-IS, any source of bleeding must be controlled immediately by persistent flushing with HBSS and absorbing the mixture of blood and HBSS with cotton swabs or Sugi cellulose absorbent triangles (Kettenbach GmbH & Co. KG, Germany), without ever touching the actual brain surface or the dura mater.

(2.3) Locate bregma on the skull, along with the rostrocaudal and mediolateral coordinates of the cortical region of interest .

(2.4) Drill the cranium with a fine micro-drill tip (Fine Science Tools, BC, Canada) under the microscope, using low-force, long movements. We observed that constantly applying sterile saline or HBSS to the bone before drilling makes it soft and spongy and smoothens the drilling process.

(2.4.1) For a survival microinjection experiment, thin the bone until it is flexible under gentle pressure. Homogenize and polish the surface with a silicone polisher micro-drill tip. The bone will be made transparent via an HBSS- or silicone oil-filled silicone chamber in step 2.5 .

(2.4.2) For an acute electrode insertion, perform a craniotomy by carefully delineating an area of ∼3 millimeters (mm) × 3 mm and thinning the perimeter of this area until it can be safely pierced. Then gently lift the central piece of bone while avoiding damage to the brain. For electrophysiology recordings, place a stainless steel skull screw in a region of no interest in the contralateral hemisphere to use as ground and reference.

(2.5) Around either the thinned or the removed part of the bone, lay down, in successive layers, a thin-walled silicone chamber (Dow Corning, MI, United States). Allow it to harden, then fill it with HBSS. Make sure the silicone does not spill onto the thinned bone or into the craniotomy, by applying it in several small layers that build upon each other before it hardens solid.

(3) Stimulation

(3.1) Set up the hardware, as required. Configure the sensory stimulation and OI-IS setups as shown in .
(3.1.1) Turn on the stimulation system. In our setup, we use a constant-current stimulus isolator (World Precision Instruments, FL, United States) to deliver bipolar impulses to a 0.58 mm-thick rectangular piezoelectric double-quick-mount actuator (Mide Technology – Piezo, MA, United States), which can deflect ± 270 microns. This deflection is amplified by extending the length of the device using a 3D-printed hollow plastic micropipette , although even a 200-micron deflection should be sufficient to elicit cortical responses . When the stimulus isolator delivers pulses of 400 microamperes, the 3D-printed micropipette is displaced at a speed of approximately 35 microns per millisecond, optimal for eliciting cortical evoked responses (unpublished observations).

(3.1.2) Turn on the impulse generator. In our setup, we use a Master-9 Programmable Pulse Stimulator (A.M.P.I., Israel) to deliver 245-millisecond square-wave pulses at 4 Hz to the piezoelectric actuator.

(3.2) Prepare the somatosensory stimulation: insert each individual whisker inside the micropipette attached to the piezoelectric device, which is deflected with a ramp-hold-return paradigm at a frequency close to the rodent's natural whisking range . The micropipette should ideally reach as close as 2 mm from the face and deflect only rostro-caudally, a preferred direction for the whisker sensory system . Ideally, different micropipettes should be moved without touching any of the other micropipettes or intact whiskers.

(4) Optical Imaging of Intrinsic Signals

(4.1) Optical Imaging of Intrinsic Signals is performed with a monochrome Dalsa DS-21-01M60 camera fitted with a 60 mm AF Micro-Nikkor f/2.8D lens (Nikon Corporation, Japan), linked to a Brain Imager 3001M interface (Optical Imaging Ltd., Israel) and controlled by the VDAQ imaging software (Optical Imaging Ltd., Israel).
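Before moving on, the deflection-pulse timing defined in steps 3.1.1 and 3.1.2 can be laid out explicitly; a sketch assuming a 6 s stimulation epoch (the epoch length used later, in step 4.6), with names of our own choosing:

```python
def pulse_onsets_s(stim_duration_s=6.0, rate_hz=4.0, pulse_ms=245.0):
    """Onset times (s) of the 245 ms square-wave deflection pulses
    delivered at 4 Hz within one stimulation epoch."""
    period_s = 1.0 / rate_hz
    assert pulse_ms / 1000.0 < period_s, "pulse must fit within its period"
    n_pulses = int(round(stim_duration_s * rate_hz))
    return [i * period_s for i in range(n_pulses)]

onsets = pulse_onsets_s()  # 24 pulses at 0.0 s, 0.25 s, 0.5 s, ...
```

Note that a 245 ms pulse at 4 Hz leaves only a 5 ms inter-pulse gap, so the actuator is deflected for nearly the whole epoch.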
Throughout all experiments, the camera resolution is 1024 × 1024 and the frame rate is 30 Hz, down-sampled to a 10 Hz data frame rate. For electrophysiology insertion recordings, we use a VZM1000i zoom lens with up to 10x magnification (Edmund Optics, NJ, United States) in order to view and count the electrode's upper contacts that remain above the cortical surface. This makes it possible to monitor the insertion of the probe and estimate the electrode's cortical insertion depth.

(4.2) Turn on the 530-nanometer (nm) LED (Mightex, CA, United States) and position it such that it illuminates the entire ROI uniformly, with the peak of luminosity at the center of the region intended for microinjections (based on atlas coordinates) or for insertion of a neurophysiology probe (based on the optical imaging performed in a previous imaging session, prior to the microinjections). Leave it on continuously while adjusting the position of the charge-coupled device camera.

(4.3) Translate and rotate the camera until the entire ROI is within its field of view. Position the camera above the ROI so that its optical axis is approximately orthogonal to the cortical surface. Define the imaged region within the field of view.

(4.4) Adjust the LED output to maximize the luminosity values within the imaged area while avoiding saturation. If there are any light reflections, such as reflections caused by the silicone chamber or the HBSS inside it, keep them outside of the imaged region or try repositioning the illumination light-guide.

(4.5) Before each run, save an image of the pial vessels under green-light illumination as a reference. The imaged ROI can be saved as a separate image, to be used in step 4.8. The topography of the cortical vessels can then be viewed in vivo using a surgical microscope, thus making it possible to guide the insertion of a micropipette or electrode to the small target area.
It can also be used for analyzing whether the targeted module shifted for unexpected reasons.

(4.6) Use the OI-IS system to image the response to stimulating each individual whisker of interest. Experimental runs consist of ten stimulation trials (condition 1) interleaved with ten trials of spontaneous activity (condition 0). Each stimulation trial consists of 2 s of baseline activity, 6 s of stimulation (in our case, bidirectional piezoelectric whisker deflections), and then 2 s with no stimulus, followed by an inter-trial interval of 7 s. Optical imaging is performed throughout all stimulation and spontaneous-activity trials.

(4.7) Compute a trial-by-trial single-condition map by dividing the average of images obtained during the response to the whisker of interest (condition 1) by the average of images obtained during the no-stimulus condition (condition 0). Alternatively, or in addition, compute a trial-by-trial differential response map by dividing the average of images obtained during the response to the whisker of interest by the average of images obtained during the response to stimulating a different whisker . In both single-condition and differential analyses, we recommend subtracting the frame obtained just before the stimulation begins, to remove slow drifts in cerebral blood volume (CBV) and/or oxygenation. For each of the trial-by-trial single-condition and differential maps, the results obtained from the ten trials (10 stimulation blocks) within a run are used for computing the mean and standard deviation (SD), to obtain an averaged stimulus-evoked response or a difference map for the current run.

(4.8) On each of the hemodynamic response images, estimate the activated area using an automated, objective algorithm (automated except for determining the statistical threshold for activation), and then overlay this result on top of the ROI image from step 4.5 .
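A minimal sketch of the trial-wise map computation (step 4.7) together with the small-cluster cleanup used in the automated estimation of the activated area (step 4.8); the exact order of the drift subtraction and the division is our assumption, as is the toy activation map:

```python
import numpy as np
from collections import deque

def single_condition_map(stim_frames, blank_frames, prestim_frame):
    """Step 4.7: divide the mean stimulated image by the mean blank
    image, after subtracting the frame taken just before stimulus
    onset to remove slow drifts (order of operations is an assumption)."""
    stim = np.asarray(stim_frames, dtype=float).mean(axis=0) - prestim_frame
    blank = np.asarray(blank_frames, dtype=float).mean(axis=0)
    return stim / blank

def filter_small_clusters(binary_map, min_size=8):
    """Step 4.8: drop 8-connected clusters of 'active' pixels with
    fewer than min_size pixels (the protocol removes clusters of <= 7)."""
    mask = np.asarray(binary_map, dtype=bool)
    out = np.zeros_like(mask)
    seen = np.zeros_like(mask)
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                comp, queue = [], deque([(i, j)])
                seen[i, j] = True
                while queue:                      # flood-fill one cluster
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w \
                                    and mask[ny, nx] and not seen[ny, nx]:
                                seen[ny, nx] = True
                                queue.append((ny, nx))
                if len(comp) >= min_size:         # keep clusters of >= 8 px
                    for y, x in comp:
                        out[y, x] = True
    return out

# Toy demo: a uniform 2% reflectance change, then cluster cleanup.
prestim = np.full((6, 6), 100.0)
m = single_condition_map([np.full((6, 6), 102.0)] * 3,
                         [np.full((6, 6), 100.0)] * 3, prestim)
demo = np.zeros((6, 6), dtype=bool)
demo[0:3, 0:3] = True      # 9-pixel cluster: kept
demo[5, 5] = True          # lone pixel: removed
cleaned = filter_small_clusters(demo)
```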
The algorithm estimates the pixel-wise mean and SD of the relative response over stimulation blocks in one or more runs. We perform pixel-wise statistical testing of the null hypothesis that there is no difference between the mean response in the stimulation condition and in the no-stimulation condition (t-test, p < 0.01). This results in a binary map of pixels where the null hypothesis was accepted or rejected. We then mask out pixels located within pial vessels segmented from the OI image taken under illumination at a wavelength centered on 530 nm. To eliminate spurious response-like results from single pixels, we perform pixel-by-pixel neighborhood connectivity analysis on the binary map from which the blood-vessel regions were excluded, and eliminate all responses that form clusters of 7 or fewer 'connected' pixels. A 'connected pixel' is defined as any pixel adjacent to the currently analyzed pixel by sharing an edge or a corner (eight-pixel neighborhood). Lastly, we compute the convex hull of the remaining clustered pixels in the binary image.

(4.9) If you stimulate whiskers individually in separate runs, repeat steps 3.2 and 4.5–4.8 for each whisker. At the end, superimpose a delineation of the responses of all whiskers of interest, for a comprehensive overview of all the responses, on the reference image of the pial vessels obtained under green-light illumination.

(5) Microinjection of Optogenetics Viral Vector

Configure the viral stereotaxic microinjection setup as shown in . NOTE: Set up the microinjection apparatus and surgical area in accordance with your local government and university regulations, and following previous publications on the subject . We use a 10-microliter (μL) 701-RN glass micro-syringe (Hamilton, NV, United States) controlled by a PHD ULTRA programmable microinjection pump (Harvard Apparatus, MA, United States).
We use a Syringe Priming Kit (Chromatographic Specialties Inc., ON, Canada) to load just over 10 microliters of mineral oil (Millipore-Sigma Canada Co., ON, Canada). To minimize cortical damage, use borosilicate glass micropipettes (World Precision Instruments, FL, United States) pulled to a tip outer diameter of 30–100 microns or smaller. If these are not available in your lab, use 36 G or finer needles (NanoFil, World Precision Instruments, FL, United States). Beveled needles and micropipettes will penetrate the dura more easily, whereas blunt ones will expel a more controlled drop of viral solution. Keep in mind that the smaller the tip's inner diameter, the higher the chance of tissue backflow clogging it. To prevent this, apply a constant, slightly positive pressure when moving the micropipette up or down through the cortex.

(5.1) Load the virus into the glass micropipette at a rate of 50–250 nanoliters (nL) per minute, using a piece of sterile parafilm or metal foil that is non-reactive with the virus. It serves as a shallow, non-porous 'dish' in which to safely deposit the virus so that it gets taken up by the micropipette. Do not allow the virus to reach past the glass micropipette and into the syringe. For our optogenetics experiments, we used the virus AAV2/8-CAG-flex-ChR2-tdTomato-WPRE with a titer of 1.5e13 genome copies per mL, as prepared by the Neuro-Photonics Centre's Molecular Tools Platform (Université Laval, QC, Canada). NOTE: Translation of a floxed or double-floxed inverted open-reading-frame viral genome depends exclusively on the spatially specific presence of the Cre-recombinase in Cre knock-in mice, allowing virtually 100% tropism for the targeted tissue/layer/cells .

(5.2) Select sites for viral microinjections in close proximity around the modules of interest, taking into account the lateral spread of the virus, but strictly avoiding sites close to macroscopic blood vessels.
Take note of the stereotaxic location of the selected sites relative to bregma. Importantly, the site selected for microinjection should be projected onto the image of the cortical surface and pial vessels . This will make it possible to guide the insertion of the micropipette to the selected site while viewing the pial vessels through a surgical microscope.

(5.3) For each site, gently pierce the thinned cranium, creating a small hole with a fine needle or scalpel, while flushing thoroughly with HBSS.

(5.4) Position the glass micropipette above the insertion site, then insert and lower it to the desired cortical depth, keeping positive pressure in it throughout the insertion to avoid clogging. The cortical depth of the insertion can be estimated relative to the point at which the micropipette first touched the surface of the cortex, based on continuous imaging of the insertion site using the OI system. Wait up to 5 min following the insertion of the micropipette, to allow the brain tissue to settle. Taking images of all micropipette insertions is highly recommended, as part of the documentation of the experiment.

(5.5) Inject 100–150 nL of virus solution per site, at a rate of 20–100 nL per minute. Given the original titer, this provides approximately 1.5–2.25e9 total genome copies per microinjection. While the viral spread and infection efficiency are also significant factors in opsin expression, it has been previously reported that at least 1e12 genome copies are sufficient for a cortical transduction volume of 1 mm³ . Note that other researchers have used as few as 6e7 viral particles per injection, with excellent results . Wait 10 minutes for the injected solution to diffuse into the tissue.

(5.6) Retract the glass micropipette while keeping a positive pressure inside.

(5.7) Repeat steps 5.1–5.5 at the other selected insertion sites.
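As a quick check on the dose arithmetic in step 5.5 (1 mL = 10^6 nL):

```python
def genome_copies(volume_nl, titer_gc_per_ml):
    """Total viral genome copies delivered in one microinjection:
    convert nL to mL (1 mL = 1e6 nL), then multiply by the titer."""
    return (volume_nl / 1e6) * titer_gc_per_ml

# With the titer quoted in step 5.1 (1.5e13 gc/mL), a 100 nL bolus delivers:
gc = genome_copies(100, 1.5e13)  # 1.5e9 genome copies
```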
(5.8) After completing all the microinjections for an animal, clean the glass micropipette tip with sterile isotonic saline or HBSS drips. At the end of the microinjection session, drop the micropipette with any remaining virus in a solution of 0.5% sodium hypochlorite for at least 15 min, and then dispose it in a biohazard sharps container. Disinfect the surgical area and tools with a bleach- or peroxide-based solution, not an alcohol-based one. (5.9) Animal Recovery (5.9.1) Clean the treated area with sterile isotonic saline or HBSS. Close the skin flaps and suture them together using absorbable sutures of size 5-0 or 6-0 (Ethicon Inc., NJ, United States). (5.9.2) Apply Polysporin triple antibiotic cream with lidocaine topically (Johnson & Johnson Inc., NJ, United States) onto the sutured flaps. Administer isotonic sterile saline or dextrose solution subcutaneously in the back of the animal, to prevent dehydration. (5.9.3) Monitor the animal until it recovers from anesthesia and demonstrates full mobility. If no signs of discomfort are visible, return the animal to its cage. (5.9.4) Monitor daily and inject an analgesic agent (carprofen 5−10 mg/kg subcutaneous) for 3 days postoperatively. (6) Electrophysiology Recordings (6.1) Allow 3–6 weeks for the virus to incubate and express. (6.2) Repeat steps 1.1–4.9 of the protocol. (6.3) Select a site(s) for electrophysiology recording. The position is guided by the OI-IS responses obtained in the session that preceded the microinjections. We perform optical imaging in preparation for the neurophysiological recordings too, both for verification and evaluation of the functionality of the modules of interest following the virus microinjections. Strictly avoid electrode insertions close to macroscopic blood vessels. The images of the responses from the two sessions can be spatially registered by aligning the pial vessel images obtained in the two sessions . 
Similarly to the guidance of the insertions for microinjections, electrode insertions are also guided according to the image of the pial vessels ( , bottom-right panel). (6.4) Configure the setup for optogenetics photostimulation together with electrophysiology recordings, as shown in . If applicable, switch the OI-IS lens to a high-magnification lens, to make it possible to monitor the position of insertion and the depth of the insertion based on imaging the contacts of the probe. (6.5) Position an acute recording electrode above the selected site, with its recording axis orthogonal to the local surface of the cortex. If using a linear/laminar probe, estimate the angle relative to the cortex from multiple viewpoints, and – if needed – modify the insertion angle for an approximate orthogonal orientation relative to the cortical manifold. For post-experiment localization of the electrode track, gently dip the recording electrode shank a few millimeters in a DiI Vybrant (Life Technologies, CA, United States) cell-labeling solution prior to inserting it into its final position. NOTE: Electrophysiological signals sampled at 24,414 Hz are pre-processed by a PZ5 NeuroDigitizer 128-channel preamplifier (Tucker-Davis Technologies, FL, United States), and then processed and recorded using the Synapse Suite software (Tucker-Davis Technologies, FL, United States). For mouse experiments that do not require electrolytic micro-lesions, we use A1 × 32-50-177 probes with a 50 micron thick shaft (NeuroNexus, MI, United States). (6.6) Place an optic fiber connected to high-powered light-emitting diode immediately next to the electrode. The optic fiber should be positioned approximately 0.5 mm from the dura mater, pointing to the region of cortex where the electrode is inserted, which is expected to be infected by the previously injected virus. NOTE: Use Dr. 
Karl Deisseroth’s link at: https://web.stanford.edu/group/dlab/cgi-bin/graph/chart.php for a “Brain tissue light transmission calculator” to predicted irradiance values from a given user-defined optic fiber through standard mammalian brain tissue. For example, our experiments used a multimode optic fiber with numerical aperture of 0.37 and 1 mm inner core diameter (Mightex, CA, United States), connected by SMA to a high-power, fiber-coupled, 470 nm LED (ThorLabs Inc., NJ, United States) for exciting the ChR2 opsin. For an LED light power output of 6.4 milliWatt (mW), the irradiance measured at the fiber tip is 2.03 mW/mm 2 , as verified before each experiment using a digital handheld power meter (ThorLabs Inc., NJ, United States). Then, the calculated irradiance value at a cortical depth of 100 micron is 1.43 mW/mm 2 , and at 1 mm deep, it is 0.1 mW/mm 2 . Choose the optic fiber parameters based on the calculations of brain tissue volume intended to be recruited by photo-stimulation, as per your experimental needs . Power outputs of up to 20 mW/mm 2 are safe to use in neurons in vivo . Conversely, even sub-mW light intensities are sufficient to elicit optogenetics effects, although the induced voltage changes from the resting membrane potential will be understandably smaller . While monitoring with the high-magnification lens attached to the OI-IS camera, slowly insert the electrode down to the desired cortical depth. This can be estimated by the number of contacts that remain visible above the cortical surface, and by taking into account the geometry of the probe, such as the arrangement of contacts and the distance between them . Wait 5 min for the brain tissue to settle. (6.7) Record the responses to the planned combinations of sensory stimulations and/or LED optogenetics photostimulation. 
We use the same experimental paradigm as in steps 3.1.2 and 4.6, except that we turn on the optogenetics photostimulation 2.25 s after the first sensory stimulation, and turn it off 2 s later. Applying the calculations from step 6.6, we use an exponential series of eight LED power outputs, from 0.1 to 12.8 mW, as measured at the tip of the optic fiber. Typically, this encompasses the full range of optogenetics effects, as 0.1 mW elicits negligible effects, whereas 12.8 mW virtually saturates the system. As a control, photo-stimulation in opsin-negative mice, whether wild-types injected with a Cre-dependent viral vector or mice expressing local Cre recombinase injected with a virus containing no opsin genome, should produce no observable optogenetics effects . (7) Post-experiment Histology Evaluation (7.1) At the end of the recording experiment, euthanize and perfuse the animal according to your institutional guidelines, using isotonic saline and 4% paraformaldehyde solution in phosphate buffered saline. (7.2) Extract and fixate the brain. In order to confirm the location of the electrode, flatten the cortical hemisphere containing the ROI by removing the contralateral hemisphere if not needed , gently scooping out the brainstem and sub-cortical parts, and placing a light flat weight made from a non-reactive material (we use an empty 15 mL glass Erlenmeyer flask), on top of the cortex, which will then be submerged in fixative. (7.3) When fixation is complete, perform your histology protocol to obtain slices parallel to the cortical surface. Frozen fixed mouse brain blocks are sectioned to obtain 30 micron-thick slices using a cryostat (Leica Biosystems, Germany), although 40 microns is safer for fragile tissues. We use triple fluorescent slices [DiI and the opsins’ fluorescent tags, counterstained with 4’,6-diamidino-2-phenylindole (DAPI) to visualize cell bodies] in conjunction with interleaved slices stained with cytochrome oxidase to visualize S1BF barrels . 
To verify opsin expression and tropism, a typical protocol involves successive steps in 0.1% Triton X-100 (MilliporeSigma, MA, United States) to permeabilize cell membranes; in normal donkey or horse serum (MilliporeSigma, MA, United States) step to minimize non-specific binding; in the primary antibody usually overnight; finally, in the secondary antibody with fluorescent tags, which comes from a different species than the primary .
All procedures were approved by the animal care committees of the Montreal Neurological Institute and McGill University and were carried out in accordance with the guidelines of the Canadian Council on Animal Care. Adult female and male C57BL/6 mice, 10–15 weeks old, were used for all experiments. The choice of mice, and of their genotype and phenotype, must be made judiciously according to the specific experimental needs. A list of equipment items and materials commonly used in the experiments we describe is provided in . Before experiments, sterilize surgical instruments using a hot bead sterilizer (Germinator 500, Stoelting, IL, United States) or by autoclaving. Apply aseptic protocols to the surgery and recovery areas. (1.1) Induce and then systematically maintain an appropriate plane of anesthesia and analgesia for the surgical procedure. We use a ‘Mouse Cocktail’ combination of ketamine 80–100 mg/kg, xylazine 10 mg/kg and acepromazine 2.5–3 mg/kg, injected I.P., to induce a surgical plane of anesthesia, followed by ketamine 80–100 mg/kg and xylazine 10 mg/kg to maintain anesthesia . For analgesia, we inject an initial one-time bolus of carprofen 5–10 mg/kg subcutaneously . To verify the surgical level of anesthesia, check for the absence of whisking and of the withdrawal reflex during a painful hindpaw pinch, and the absence of blinking upon touching the eye (to be done while also constantly hydrating the cornea with a protective ophthalmic ointment). In addition, monitor the heartbeat, and make sure the respiration is regular with no signs of gasping . The anesthetics used should leave neurophysiological activity and neurovascular coupling as unchanged as possible. To this end, a light plane of anesthesia must be kept constant during the recording sessions by systematically monitoring the vital signs and reflexes, as well as the electrophysiology readout .
Any systematic increase in the heart rate or respiration rate must be counteracted by an additional low dose of injectable anesthetic. Conversely, if the vital measures decrease and the spontaneous electrophysiological activity is visibly poor, administer the appropriate antagonist . (1.2) If using piezoelectric whisker stimulation, tape or cut away all the same-side whiskers that will not be stimulated during the experiment. Use the surgical microscope to identify these whiskers and to verify that they are fully trimmed. (1.3) Position the animal in a small-animal stereotaxic frame (David Kopf Instruments, CA, United States) in a manner consistent with the conventions of the reference atlas, and provide free-flowing oxygen via a nose cone . During electrophysiology recordings, switch to a mixture of 70% medical air and 30% oxygen. To reduce discomfort, use non-penetrating ear bars, covered with a drop of Xylocaine ointment (Aspen Pharmacare Canada Inc., ON, Canada).
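The anesthetic doses above are specified per kilogram of body weight; they can be converted into per-animal injection volumes with a small calculator. This is an illustrative sketch, not part of the protocol: the function name is ours, and the stock concentrations used in the example (ketamine 100 mg/mL, xylazine 20 mg/mL, acepromazine 10 mg/mL) are assumptions — always use the concentrations printed on your own vials.

```python
# Sketch: convert the per-kg doses quoted above into injection volumes for one
# animal. Stock concentrations are ASSUMPTIONS for illustration; check the
# labels on your own vials.

def dose_volume_ul(body_mass_g: float, dose_mg_per_kg: float,
                   stock_mg_per_ml: float) -> float:
    """Volume of undiluted stock (in microliters) delivering the requested dose."""
    dose_mg = dose_mg_per_kg * body_mass_g / 1000.0   # mg needed for this animal
    return dose_mg / stock_mg_per_ml * 1000.0         # mL of stock -> microliters

# Example: induction doses for a 25 g mouse, taking mid-range values.
mouse_g = 25.0
ketamine_ul = dose_volume_ul(mouse_g, 90.0, 100.0)    # -> 22.5 uL
xylazine_ul = dose_volume_ul(mouse_g, 10.0, 20.0)     # -> 12.5 uL
acepromazine_ul = dose_volume_ul(mouse_g, 2.75, 10.0) # -> 6.875 uL
```

In practice the three stocks are usually combined into a single diluted cocktail, but the per-drug arithmetic is the same.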
(2.1) Cut the skin longitudinally along the midline with a scalpel and retract it laterally with a clamp. Remove soft tissue and dry off the exposed skull surface using cotton swabs. In case of muscle or bone bleeding, sparingly administer topical epinephrine 1 mg/mL (Epiclor, McCarthy & Sons Service, AB, Canada) or sterile isotonic 0.9% NaCl saline. (2.2) Flush the surgical site with small amounts of topical lidocaine hydrochloride 2% (Wyeth, NJ, United States). Once the bone has been pierced, do not use lidocaine or epinephrine, as they will modify the animal’s physiology and brain state. Instead, use sterile isotonic 0.9% NaCl saline (Baxter Healthcare Corporation, IL, United States) or, preferably, Hanks’ Balanced Salt solution (HBSS) (MilliporeSigma Canada Co., ON, Canada) to thoroughly clean the surgical site. Because bleeding can impact the quality of the OI-IS, any sources of bleeding must be controlled immediately by persistently flushing with HBSS and absorbing the mixture of blood and HBSS with cotton swabs or Sugi cellulose absorbent triangles (Kettenbach GmbH & Co. KG, Germany), without ever touching the actual brain surface or the dura mater. (2.3) Locate bregma on the skull, and determine the rostrocaudal and mediolateral coordinates of the cortical region of interest . (2.4) Drill the cranium with a fine micro-drill tip (Fine Science Tools, BC, Canada) under the microscope, using long, low-force movements. We observed that constantly applying sterile saline or HBSS to the bone before drilling makes it soft and spongy, and smooths the drilling process. (2.4.1) For a survival microinjection experiment, thin the bone until it is flexible under gentle pressure. Homogenize and polish the surface with a silicone polisher micro-drill tip. The bone will be made transparent via an HBSS- or silicone oil-filled silicone chamber in step 2.5 .
(2.4.2) For an acute electrode insertion, perform a craniotomy by carefully delineating an area of ∼3 millimeters (mm) × 3 mm, and thinning the perimeter of this area until it can be safely pierced. Then gently lift the central piece of bone, while avoiding damage to the brain. For electrophysiology recordings, place a stainless steel skull screw in a region of no interest in the contralateral hemisphere, to use as a ground and reference. (2.5) Around either the thinned or removed part of the bone, lay down in successive layers a thin-walled silicone chamber (Dow Corning, MI, United States). Allow it to harden, then fill it with HBSS. Make sure the silicone does not spill onto the thinned bone or into the craniotomy, by applying it in several small layers that build upon one another, each before the previous one hardens solid.
(3.1) Set up the hardware, as required. Configure the sensory stimulation and OI-IS setups as shown in . (3.1.1) Turn on the stimulation system. In our setup, we use a constant current stimulus isolator (World Precision Instruments, FL, United States) to deliver bipolar impulses to a 0.58 mm-thick rectangular piezoelectric double-quick-mount actuator (Mide Technology – Piezo, MA, United States), which can deflect ± 270 microns. This deflection is amplified by extending the length of the device using a 3D-printed hollow plastic micropipette , although even a 200 micron deflection should be sufficient to elicit cortical responses . When the stimulus isolator delivers pulses of 400 microamperes, the 3D-printed micropipette is displaced at a speed of approximately 35 microns per millisecond, optimal for eliciting cortical evoked responses (unpublished observations). (3.1.2) Turn on the impulse generator. In our setup, we use a Master-9 Programmable Pulse Stimulator (A.M.P.I., Israel) to deliver 245 ms-long square-wave pulses at 4 Hz to the piezoelectric actuator. (3.2) Prepare the somatosensory stimulation: insert each individual whisker inside the micropipette attached to the piezoelectric device, which is deflected with a ramp-hold-return paradigm at a frequency close to the rodent’s natural whisking range . The micropipette should ideally reach as close as 2 mm from the face, and deflect only rostro-caudally, a preferred direction for the whisker sensory system . Ideally, different micropipettes should be moved without touching any of the other micropipettes or intact whiskers.
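The pulse-train parameters above (245 ms-long pulses at 4 Hz, i.e., a 250 ms period) can be sketched as a drive-signal timeline. This is an illustrative sketch only: the 1 kHz timeline resolution, the 6 s window length (taken from the stimulation trial structure in step 4.6), and all variable names are our choices, not part of the protocol.

```python
import numpy as np

# Sketch of the piezo drive signal: 245 ms square pulses repeating at 4 Hz
# (250 ms period) during a 6 s stimulation window, on a 1 kHz timeline.
FS_HZ = 1000        # timeline resolution (samples per second) - our choice
PULSE_MS = 245      # pulse width, from the protocol
PERIOD_MS = 250     # 4 Hz repetition -> 250 ms period

def pulse_train(stim_s: float = 6.0) -> np.ndarray:
    """Boolean piezo drive signal: True while a pulse is high."""
    t_ms = np.arange(int(stim_s * FS_HZ)) * 1000 // FS_HZ   # time of each sample, in ms
    return (t_ms % PERIOD_MS) < PULSE_MS

train = pulse_train()
# Count pulses: the first sample plus every False->True transition.
n_pulses = int(train[0]) + int(np.count_nonzero(np.diff(train.astype(np.int8)) == 1))
```

With these parameters the 6 s window contains 24 pulses at a 98% duty cycle, i.e., the whisker is held deflected for most of each cycle and returns briefly between pulses.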
(4.1) Optical Imaging of Intrinsic Signals is performed with a monochrome Dalsa DS-21-01M60 camera fitted with a 60 mm AF Micro-Nikkor f/2.8D lens (Nikon Corporation, Japan), linked to a Brain Imager 3001M interface (Optical Imaging Ltd., Israel) and controlled by the VDAQ imaging software (Optical Imaging Ltd., Israel). Throughout all experiments, the camera resolution is 1024 × 1024 pixels and the frame rate is 30 Hz, down-sampled to a 10 Hz data frame rate. For electrophysiology insertions, we switch to a VZM1000i zoom lens with up to 10× magnification (Edmund Optics, NJ, United States) in order to view and count the electrode’s upper contacts that remain above the cortical surface. This makes it possible to monitor the insertion of the probe and estimate the electrode’s cortical insertion depth. (4.2) Turn on the 530 nanometer (nm) LED (Mightex, CA, United States), and position it such that it illuminates the entire ROI uniformly, with the peak of luminosity at the center of the region intended for microinjections (based on atlas coordinates) or insertion of a neurophysiology probe (based on the optical imaging pursued in a previous imaging session, prior to performing the microinjections). Leave it on continuously while adjusting the position of the charge-coupled device camera. (4.3) Translate and rotate the camera until the entire ROI is within the field of view of the camera. Position the camera above the ROI, so that its optical axis is approximately orthogonal to the cortical surface. Define the imaged region within the field of view. (4.4) Adjust the LED output to maximize the luminosity values within the area imaged, while avoiding saturation. If there are any light reflections – such as reflections caused by the silicone chamber or the HBSS inside it – keep them outside of the imaged region or try repositioning the illumination light-guide. (4.5) Before each run, save an image of the pial vessels under green-light illumination, as a reference.
The imaged ROI can be saved as a separate image, to be used in step 4.8. The topography of the cortical vessels can then be viewed in vivo using a surgical microscope, thus making it possible to guide the insertion of a micropipette or electrode to the small target area. It can also be used for analyzing whether the targeted module shifted for unexpected reasons. (4.6) Use the OI-IS system to image the response to stimulating each individual whisker of interest. Experimental runs consist of ten stimulation trials (Condition 1) interleaved with ten trials of spontaneous activity (Condition 0). Each stimulation trial consists of 2 s of baseline activity, 6 s of stimulation (in our case, bidirectional whisker piezoelectric deflections), and then 2 s with no stimulus, followed by an inter-trial interval of 7 s. Optical imaging is performed throughout all stimulation and spontaneous activity trials. (4.7) Compute a trial-by-trial single-condition map by dividing the average of images obtained during the response to the whisker of interest (condition 1) by the average of images obtained during the no-stimulus condition (condition 0; , ; , ). Alternatively, or in addition, compute a trial-by-trial differential response map by dividing the average of images obtained during the response to the whisker of interest by the average of images obtained during the response to stimulating a different whisker . In both single-condition and differential analysis, we recommend subtracting the frame obtained just before the stimulation begins, to remove slow drifts in cerebral blood volume (CBV) and/or oxygenation. For both the trial-by-trial single-condition maps and the differential maps, the results obtained from the ten trials (10 stimulation blocks) within a run are used for computing the mean and standard deviation (SD), to obtain an averaged stimulus-evoked response or a difference map for the current run.
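The map computation in step 4.7 can be sketched in a few lines of NumPy. Array names, shapes, and the drift-correction implementation (normalizing each trial by its last pre-stimulus frame) are our illustrative choices, not the authors' code; at a 10 Hz data frame rate, the 2 s baseline + 6 s stimulation + 2 s post-stimulus window gives 100 frames per trial.

```python
import numpy as np

# Sketch of step 4.7. `stim` and `blank` hold (n_trials, n_frames, h, w)
# image stacks for condition 1 and condition 0. Normalizing each trial by its
# last pre-stimulus frame is our reading of the recommended drift correction.

def single_condition_maps(stim, blank, resp=slice(20, 80), prestim=19):
    """Trial-by-trial ratio maps: condition-1 response / condition-0 activity."""
    stim_n = stim / stim[:, prestim:prestim + 1]       # frame-zero normalization
    blank_n = blank / blank[:, prestim:prestim + 1]
    return stim_n[:, resp].mean(axis=1) / blank_n[:, resp].mean(axis=1)

# Synthetic demonstration: a 1% reflectance change in a small patch.
stim = np.ones((10, 100, 8, 8))
stim[:, 20:80, 2:4, 2:4] = 1.01
blank = np.ones((10, 100, 8, 8))
maps = single_condition_maps(stim, blank)              # (10, 8, 8) trial maps
mean_map, sd_map = maps.mean(axis=0), maps.std(axis=0)
```

A differential map is obtained the same way, by passing the image stack of a different whisker's stimulation trials in place of `blank`.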
(4.8) On each of the hemodynamic response images, estimate the activated area using an automated, objective algorithm (automated except for determining the statistical threshold for activation), and then overlay this result on top of the ROI image from step 4.5 . The algorithm estimates the pixel-wise mean and SD of the relative response over the stimulation blocks in one or more runs. We perform pixel-wise statistical testing of the null hypothesis that there is no difference between the mean response in the stimulation condition and that in the no-stimulation condition ( t -test, p < 0.01). This results in a binary map indicating, for each pixel, whether the null hypothesis was rejected. We then mask out pixels located within pial vessels segmented from the OI image taken under illumination centered at 530 nm. To eliminate spurious response-like results from single pixels, we perform pixel-by-pixel neighborhood connectivity analysis on the binary map from which the blood-vessel regions were excluded, and eliminate all responses that form clusters of 7 or fewer ‘connected’ pixels. A ‘connected’ pixel is defined as any pixel adjacent to the currently analyzed pixel by sharing an edge or a corner (eight-pixel neighborhood). Lastly, we compute the convex hull of the remaining clustered pixels in the binary image. (4.9) If you stimulate whiskers individually in separate runs, repeat steps 3.2 and 4.5–4.8 for each whisker. In the end, superimpose a delineation of the responses of all whiskers of interest on the reference image of the pial vessels obtained under green-light illumination, for a comprehensive overview of all the responses.
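The thresholding-and-clustering algorithm of step 4.8 can be sketched with SciPy. Two simplifications are ours, not the protocol's: the pixel-wise test is implemented as a one-sample t-test of the per-trial ratio maps against 1.0 (rather than an explicit stimulation-versus-blank two-sample test), and the final convex-hull step is omitted — the function stops at the cleaned binary map.

```python
import numpy as np
from scipy import ndimage, stats

# Sketch of step 4.8: pixel-wise t-test, vessel masking, and 8-connected
# cluster filtering (clusters of 7 or fewer pixels are discarded).

def activation_map(trial_maps, vessel_mask, p_thresh=0.01, min_cluster=8):
    """trial_maps: (n_trials, h, w) ratio maps; vessel_mask: True at pial vessels."""
    _, p = stats.ttest_1samp(trial_maps, popmean=1.0, axis=0)
    active = (p < p_thresh) & ~vessel_mask
    # 8-connected components ('connected' = sharing an edge or a corner).
    labels, n = ndimage.label(active, structure=np.ones((3, 3), dtype=int))
    sizes = ndimage.sum(active, labels, index=np.arange(1, n + 1))
    return np.isin(labels, 1 + np.flatnonzero(sizes >= min_cluster))

# Synthetic demonstration: a 5x5 responsive patch plus one spurious pixel.
rng = np.random.default_rng(0)
trial_maps = 1.0 + rng.normal(0.0, 1e-4, size=(10, 32, 32))
trial_maps[:, 10:15, 10:15] += 0.01     # genuine response patch
trial_maps[:, 25, 25] += 0.01           # isolated 'active' pixel
keep = activation_map(trial_maps, np.zeros((32, 32), dtype=bool))
```

The connectivity filter retains the 5×5 patch but rejects the isolated pixel, mirroring the protocol's removal of single-pixel false positives.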
Configure the viral stereotaxic microinjection setup as shown in . NOTE: Set up the microinjection apparatus and surgical area in accordance with your local government and university regulations, and following previous publications on the subject . We use a 10 microliter (μL) 701-RN glass micro-syringe (Hamilton, NV, United States) controlled by a PHD ULTRA programmable microinjection pump (Harvard Apparatus, MA, United States). We use a Syringe Priming Kit (Chromatographic Specialties Inc., ON, Canada) to load up just over 10 microliters of mineral oil (Millipore-Sigma Canada Co., ON, Canada). To minimize cortical damage, use borosilicate glass micropipettes (World Precision Instruments, FL, United States) pulled to an outer tip diameter of 30–100 microns or smaller. If these are not available in your lab, use 36 G or higher-gauge needles (NanoFil, World Precision Instruments, FL, United States). Beveled needles and micropipettes will penetrate the dura more easily, whereas blunt ones will expel a more controlled drop of viral solution. Keep in mind that the smaller the tip’s inner diameter is, the higher the chance of tissue backflow clogging it. To prevent this, apply a constant, slightly positive pressure when moving the micropipette up or down through the cortex. (5.1) Load up the virus into the glass micropipette at a rate of 50–250 nanoliters (nL) per minute, using a piece of sterile parafilm or metal foil, which is non-reactive with the virus. It serves as a shallow non-porous ‘dish’ in which to safely deposit the virus, so that it gets taken up by the micropipette. Do not allow the virus to reach past the glass micropipette and into the syringe. For our optogenetics experiments, we used the virus AAV2/8-CAG-flex-ChR2-tdTomato-WPRE with a titer of 1.5e13 genome copies per mL, as prepared by the Neuro-Photonics Centre’s Molecular Tools Platform (Université Laval, QC, Canada).
NOTE: Translation of a floxed or double-floxed inverted open-reading-frame viral genome depends exclusively on the spatially specific presence of the Cre-recombinase in Cre knock-in mice, allowing virtually 100% tropism for the targeted tissue/layer/cells . (5.2) Select sites for viral microinjections in close proximity around the modules of interest while considering the lateral spread of the virus, but strictly avoiding sites close to macroscopic blood vessels. Take note of the stereotaxic location of the selected sites relative to bregma. Importantly, the site selected for microinjection should be projected onto the image of the cortical surface and pial vessels . This will make it possible to guide the insertion of the micropipette to the selected site, while viewing the pial vessels using a surgical microscope. (5.3) For each site, gently pierce the thinned cranium, creating a small hole with a fine needle or scalpel, while flushing thoroughly with HBSS. (5.4) Position the glass micropipette above the insertion site, insert and lower it down to the desired cortical depth, while keeping positive pressure in it throughout the insertion to avoid clogging. The cortical depth of the insertion can be estimated relative to the point at which the micropipette first touched the surface of the cortex, based on continuous imaging of the insertion site using the OI system. Wait up to 5 min following the insertion of the micropipette, to allow the brain tissue to settle. Taking images of all micropipette insertions is highly recommended, as it is part of the documentation of the experiment. (5.5) Inject 100–150 nL of virus solution per site, at a rate of 20–100 nL per minute. Given the original titer, this corresponds to approximately 1.5–2.25e9 total genome copies per microinjection.
While the viral spread and infection efficiency are also significant factors in opsin expression, it has been previously reported that titers of at least 1e12 genome copies per mL are sufficient for a cortical transduction volume of 1 mm 3 . Note that other researchers have used as low as 6e7 viral particles per injection, with excellent results . Wait 10 min for the injected solution to diffuse out into the tissue. (5.6) Retract the glass micropipette while keeping a positive pressure inside. (5.7) Repeat steps 5.1–5.5 at other selected insertion sites. (5.8) After completing all the microinjections for an animal, clean the glass micropipette tip with sterile isotonic saline or HBSS drips. At the end of the microinjection session, drop the micropipette with any remaining virus in a solution of 0.5% sodium hypochlorite for at least 15 min, and then dispose of it in a biohazard sharps container. Disinfect the surgical area and tools with a bleach- or peroxide-based solution, not an alcohol-based one. (5.9) Animal Recovery (5.9.1) Clean the treated area with sterile isotonic saline or HBSS. Close the skin flaps and suture them together using absorbable sutures of size 5-0 or 6-0 (Ethicon Inc., NJ, United States). (5.9.2) Apply Polysporin triple antibiotic cream with lidocaine topically (Johnson & Johnson Inc., NJ, United States) onto the sutured flaps. Administer isotonic sterile saline or dextrose solution subcutaneously in the back of the animal, to prevent dehydration. (5.9.3) Monitor the animal until it recovers from anesthesia and demonstrates full mobility. If no signs of discomfort are visible, return the animal to its cage. (5.9.4) Monitor the animal daily and inject an analgesic agent (carprofen 5–10 mg/kg, subcutaneously) for 3 days postoperatively.
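The dosing arithmetic of step 5.5 reduces to two unit conversions: the delivered dose is titer × volume (1 nL = 1e-6 mL), and the pump time is volume ÷ rate. A minimal sketch (function and variable names are ours):

```python
# Sketch: unit-conversion checks for the microinjection parameters in step 5.5.
TITER_GC_PER_ML = 1.5e13     # titer of the viral prep quoted in step 5.1

def genome_copies(volume_nl: float, titer_gc_per_ml: float = TITER_GC_PER_ML) -> float:
    """Total genome copies delivered: titer x volume (1 nL = 1e-6 mL)."""
    return titer_gc_per_ml * volume_nl * 1e-6

def injection_minutes(volume_nl: float, rate_nl_per_min: float) -> float:
    """How long a microinjection takes at a constant pump rate."""
    return volume_nl / rate_nl_per_min

low_dose = genome_copies(100.0)            # ~1.5e9 gc per site
high_dose = genome_copies(150.0)           # ~2.25e9 gc per site
slowest = injection_minutes(150.0, 20.0)   # worst case: 7.5 min per site
```

The worst-case pump time (150 nL at 20 nL/min) is useful for planning: with the 5 min settling wait before and the 10 min diffusion wait after each injection, each site takes on the order of 20 min.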
(6.1) Allow 3–6 weeks for the virus to incubate and for the opsin to be expressed. (6.2) Repeat steps 1.1–4.9 of the protocol. (6.3) Select the site(s) for electrophysiology recording. The position is guided by the OI-IS responses obtained in the session that preceded the microinjections. We perform optical imaging in preparation for the neurophysiology recordings as well, both to verify the responses and to evaluate the functionality of the modules of interest following the virus microinjections. Strictly avoid electrode insertions close to macroscopic blood vessels. The images of the responses from the two sessions can be spatially registered by aligning the pial vessel images obtained in the two sessions . As with the microinjections, electrode insertions are guided using the image of the pial vessels ( , bottom-right panel). (6.4) Configure the setup for optogenetics photostimulation together with electrophysiology recordings, as shown in . If applicable, switch the OI-IS lens to a high-magnification lens, to make it possible to monitor the position and depth of the insertion based on imaging the contacts of the probe. (6.5) Position an acute recording electrode above the selected site, with its recording axis orthogonal to the local surface of the cortex. If using a linear/laminar probe, estimate the angle relative to the cortex from multiple viewpoints, and – if needed – modify the insertion angle for an approximately orthogonal orientation relative to the cortical manifold. For post-experiment localization of the electrode track, gently dip the recording electrode shank a few millimeters in a DiI Vybrant (Life Technologies, CA, United States) cell-labeling solution prior to inserting it into its final position.
NOTE: Electrophysiological signals sampled at 24,414 Hz are pre-processed by a PZ5 NeuroDigitizer 128-channel preamplifier (Tucker-Davis Technologies, FL, United States), and then processed and recorded using the Synapse Suite software (Tucker-Davis Technologies, FL, United States). For mouse experiments that do not require electrolytic micro-lesions, we use A1 × 32-50-177 probes with a 50 micron-thick shaft (NeuroNexus, MI, United States). (6.6) Place an optic fiber connected to a high-power light-emitting diode immediately next to the electrode. The optic fiber should be positioned approximately 0.5 mm from the dura mater, pointing to the region of cortex where the electrode is inserted, which is expected to be infected by the previously injected virus. NOTE: Use the “Brain tissue light transmission calculator” available through Dr. Karl Deisseroth’s laboratory at https://web.stanford.edu/group/dlab/cgi-bin/graph/chart.php to predict irradiance values from a given user-defined optic fiber through standard mammalian brain tissue. For example, our experiments used a multimode optic fiber with a numerical aperture of 0.37 and a 1 mm inner core diameter (Mightex, CA, United States), connected by SMA to a high-power, fiber-coupled, 470 nm LED (ThorLabs Inc., NJ, United States) for exciting the ChR2 opsin. For an LED light power output of 6.4 milliwatts (mW), the irradiance measured at the fiber tip is 2.03 mW/mm 2 , as verified before each experiment using a digital handheld power meter (ThorLabs Inc., NJ, United States). Then, the calculated irradiance value at a cortical depth of 100 microns is 1.43 mW/mm 2 , and at 1 mm deep, it is 0.1 mW/mm 2 . Choose the optic fiber parameters based on calculations of the brain tissue volume intended to be recruited by photo-stimulation, as per your experimental needs . Power outputs of up to 20 mW/mm 2 are safe to use in neurons in vivo .
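The depth-irradiance estimate behind such calculators can be sketched with the combined geometric-spread plus Kubelka-Munk scattering model of Aravanis et al. (2007). This is a hedged approximation, not the online tool itself: the tissue refractive index (1.36) and scattering coefficient (~11 per mm near 470 nm) are literature values we assume here, so the resulting numbers will not exactly reproduce either the calculator or the example values quoted above.

```python
import math

# Sketch: irradiance vs. depth below an optic fiber tip, following the
# geometric cone-spread x Kubelka-Munk transmission model (Aravanis et al.,
# 2007). n_tissue and s_per_mm are ASSUMED literature values.

def irradiance_at_depth(i_tip_mw_mm2: float, z_mm: float,
                        fiber_radius_mm: float = 0.5, na: float = 0.37,
                        n_tissue: float = 1.36, s_per_mm: float = 11.2) -> float:
    """Estimated irradiance (mW/mm^2) at depth z_mm below the fiber tip."""
    rho = fiber_radius_mm * math.sqrt((n_tissue / na) ** 2 - 1.0)
    geometric = (rho / (z_mm + rho)) ** 2        # cone spread of the beam
    scattering = 1.0 / (s_per_mm * z_mm + 1.0)   # Kubelka-Munk transmission
    return i_tip_mw_mm2 * geometric * scattering

# Profile for the fiber described above: 2.03 mW/mm^2 measured at the tip.
profile = [irradiance_at_depth(2.03, z / 10.0) for z in range(11)]  # 0 to 1 mm
```

Whatever the exact parameter choices, the qualitative behavior is the point: irradiance falls off steeply and monotonically with depth, which is why the fiber parameters must be matched to the intended recruitment volume.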
Conversely, even sub-mW light intensities are sufficient to elicit optogenetics effects, although the induced voltage changes from the resting membrane potential will be correspondingly smaller . While monitoring with the high-magnification lens attached to the OI-IS camera, slowly insert the electrode down to the desired cortical depth. The depth can be estimated from the number of contacts that remain visible above the cortical surface, taking into account the geometry of the probe, such as the arrangement of contacts and the distance between them . Wait 5 min for the brain tissue to settle. (6.7) Record the responses to the planned combinations of sensory stimulations and/or LED optogenetics photostimulation. We use the same experimental paradigm as in steps 3.1.2 and 4.6, except that we turn on the optogenetics photostimulation 2.25 s after the first sensory stimulation, and turn it off 2 s later. Applying the calculations from step 6.6, we use an exponential series of eight LED power outputs, from 0.1 to 12.8 mW, as measured at the tip of the optic fiber. Typically, this encompasses the full range of optogenetics effects, as 0.1 mW elicits negligible effects, whereas 12.8 mW virtually saturates the system. As a control, photo-stimulation in opsin-negative mice, whether wild-types injected with a Cre-dependent viral vector or mice expressing local Cre recombinase injected with a virus containing no opsin genome, should produce no observable optogenetics effects .
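The photostimulation parameters of step 6.7 can be written out explicitly: the exponential series of eight powers from 0.1 to 12.8 mW is a doubling series, and the LED window opens 2.25 s after the first sensory stimulus and closes 2 s later. Variable names in this sketch are illustrative.

```python
# Sketch of the step 6.7 parameters: doubling LED power series and LED timing.
led_powers_mw = [0.1 * 2 ** k for k in range(8)]   # 0.1, 0.2, 0.4, ... 12.8 mW

def led_window(first_stim_onset_s: float) -> tuple:
    """(on, off) times of the optogenetics LED within a trial, in seconds."""
    on_s = first_stim_onset_s + 2.25
    return on_s, on_s + 2.0

# With the 2 s pre-stimulus baseline of step 4.6, the first whisker deflection
# occurs at t = 2 s, placing the LED window at t = 4.25 s to t = 6.25 s.
on_s, off_s = led_window(2.0)
```

A doubling series spaces the powers evenly on a logarithmic axis, which suits dose-response characterization across the roughly two orders of magnitude between negligible and saturating effects.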
(7.1) At the end of the recording experiment, euthanize and perfuse the animal according to your institutional guidelines, using isotonic saline and a 4% paraformaldehyde solution in phosphate-buffered saline. (7.2) Extract and fix the brain. In order to confirm the location of the electrode, flatten the cortical hemisphere containing the ROI by removing the contralateral hemisphere if not needed , gently scooping out the brainstem and sub-cortical parts, and placing a light, flat weight made from a non-reactive material (we use an empty 15 mL glass Erlenmeyer flask) on top of the cortex, which will then be submerged in fixative. (7.3) When fixation is complete, perform your histology protocol to obtain slices parallel to the cortical surface. Frozen fixed mouse brain blocks are sectioned into 30 micron-thick slices using a cryostat (Leica Biosystems, Germany), although 40 microns is safer for fragile tissues. We use triple-fluorescent slices [DiI and the opsins’ fluorescent tags, counterstained with 4’,6-diamidino-2-phenylindole (DAPI) to visualize cell bodies] in conjunction with interleaved slices stained with cytochrome oxidase to visualize S1BF barrels . To verify opsin expression and tropism, a typical protocol involves successive steps: permeabilization of cell membranes in 0.1% Triton X-100 (MilliporeSigma, MA, United States); a blocking step in normal donkey or horse serum (MilliporeSigma, MA, United States) to minimize non-specific binding; incubation in the primary antibody, usually overnight; and finally, incubation in the fluorescently tagged secondary antibody, which comes from a different species than the primary .
Our first methodological objective is to inject an optogenetics virus into the mouse barrel field around a single barrel, infecting both this barrel and its immediate neighbors while ensuring that the barrel itself is not damaged. For guiding microinjections that are followed by recovery of the animal, we propose to perform minimally invasive OI-IS through the thinned skull and to gently break the surface of the skull at the selected injection points. Following an incubation period of 21–42 days, we repeat the OI-IS in order to identify the target barrel and evaluate whether any injection-related damage could hamper its functionality. This makes it possible to guide the insertion of an electrode within the barrel and to optimize the positioning of the optic fiber attached to the LED or laser photostimulation source. For this part of the experiment, we propose to perform a craniotomy, which makes it possible to obtain sharp images of the cortical surface and to insert the electrode or multi-contact probe safely. In addition to guiding the insertion of the electrode to the pre-determined barrel, the user can use the OI system to estimate the spatial extent of the optogenetics photostimulation by comparing the image obtained under the fiber-optic illumination to the pial vessels around the optic fiber. In our experiments, the OI-IS hemodynamic responses obtained before the microinjections and following the incubation period were consistent: a single barrel's localizations before and after the incubation period always overlapped. Thus, imaging post incubation is required for evaluating whether post-injection tissue damage interferes with the response of the module of interest, or for evaluating plasticity of the organization. Assuming that the virus on its own does not cause plasticity, the guidance of the neurophysiology can be based on the OI-IS results obtained before the microinjections and the topography of the cortical vessels.
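The estimation of the photostimulated region mentioned above, obtained by comparing the image acquired under fiber-optic illumination with the pial-vessel reference, can be sketched as a simple thresholding step (illustrative Python/NumPy code; the half-maximum threshold and the toy Gaussian light spot are our assumptions, not the authors' pipeline):

```python
import numpy as np

def illumination_footprint(fiber_frame, rel_threshold=0.5):
    """Estimate the cortical area lit by the optic fiber: pixels whose
    intensity exceeds rel_threshold x the frame maximum. Overlaying this
    mask on the pial-vessel reference image localizes the photostimulated
    region relative to the functional map."""
    return fiber_frame >= rel_threshold * fiber_frame.max()

# Toy frame: radially decaying light spot centered at pixel (40, 40).
yy, xx = np.mgrid[0:80, 0:80]
frame = np.exp(-((xx - 40) ** 2 + (yy - 40) ** 2) / (2 * 10.0 ** 2))
mask = illumination_footprint(frame)
area_px = int(mask.sum())  # footprint area in pixels at half-maximum
```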
The expected outcome of the OI-IS is a well-delineated area of hemodynamic activation, reflecting the original shape of – and centered on – the targeted structure. Ideally, two or more small structures such as barrels can thus be delineated and differentiated, with minimal overlap. To validate our proposed method for OI-IS-guided insertion of an electrode into a small cortical functional domain – a predetermined barrel – we performed histology of tissue slices cut tangential to the surface of the flattened brain. Barrels were stained with cytochrome oxidase, and cells were stained with the DAPI nuclear stain. By default, the microinjection sites should not generate clearly visible, long-lasting marks on the cortex. To mark the electrode insertion track, we dipped the electrode in DiI prior to insertion, in order to leave a fluorescent mark on a DAPI-stained background that can be compared to the barrel field map obtained with cytochrome oxidase staining. All our fluorescence histology slices were imaged with the appropriate filters for the three respective excitation wavelengths of DAPI, DiI, and the opsin fluorescent tag. Each slice therefore yields three images that are perfectly co-aligned, with each pixel having a one-to-one spatial association in all three images. Thus, after performing alignment between the cytochrome oxidase and DAPI images, the DiI image needs only to be super-imposed on its matching DAPI image. Alignment of penetrating cortical blood vessels is used to optimize the fine-scale registration of two consecutive histology slices, even with different staining. The resulting sections demonstrate that the insertion was into the center of the pre-determined barrel, validating the method we propose for guiding the microinjections and electrode insertion at high precision.
Our proposed methodology aims to guide microinjections of viral vectors around – and electrode insertion into – a pre-defined small functional module. Given the delicate, long-term nature of optogenetics experiments and the effort they require, it is important to place the viral microinjections and electrode insertions precisely in their intended locations.
Optical-Imaging-Based Guidance of Optogenetics Viral Injections and Electrode Insertions
Optical Imaging of Intrinsic Signals is a widely used and easy-to-implement functional imaging technique with high spatial specificity and resolution. Relatively low-cost hardware is required for implementing OI-IS: a charge-coupled device or CMOS camera with a standard 50 mm – 60 mm lens, and an image acquisition system. The systems to generate the stimuli are required for the main experiment, independent of the OI-IS-based guidance that we propose. Several facets of an optogenetics experiment can be improved with this setup. These include the precise guidance of viral microinjections and of the recording probe around or into a small functional module, and guiding the positioning of the optic fiber by estimating the region excited by the optogenetics illumination. By switching to a zoom lens, the experimenter can monitor the probe's contacts at high magnification, making it possible to control the depth of the electrode insertion. While the current experiments have focused on barrels in mouse area S1BF, OI-IS can be used to localize several distinct cortical areas and modules of interest to deliver optogenetics viruses, photostimulate optogenetically, and record from multiple sites. Given its non-invasive nature, it would also be ideal for reading out the long-term chronic effects of optogenetics stimulation. Finally, OI-IS would be ideal for guiding optogenetics experiments in non-human primates, including marmosets.
The main sources of error during OI-IS and their troubleshooting have been discussed in detail in the Results section. When used properly, this technique provides reliable, consistent identification of functional modules on the scale of hundreds of microns, to a degree of precision not attainable by using atlas coordinates and/or trial-and-error electrode insertion. We have demonstrated the accuracy of the OI-IS-guided insertions by post-experiment histology of the flattened brain. The DiI-marked electrode track is located inside the targeted barrel of interest, as identified by cytochrome oxidase and by counterstains such as DAPI. Additional immunohistochemistry options are available, such as using NeuN as the counterstain or using specific markers to identify the user's particular target, which would then be co-localized with the DiI from the recording site. Whereas OI-IS requires craniotomy in large animals, it can be performed through the thinned skull in rodents. However, imaging through the skull blurs the images because of the light scattering caused by the bone. When imaging through the skull, a commonly observed issue is spatially blurred images of the pial vessels and hemodynamic response. The first step is to make sure that the lens is focused on the pial vessels. If the blurring persists, additional thinning of the skull may help. To overcome blurring when targeting small functional modules on the scale of hundreds of microns, imaging the cortex following craniotomy can be pursued (compare rows A and B in each of the corresponding figures). Resecting the dura mater is required for imaging in large animals. In rodents – especially in mice – resecting the dura mater is not a precondition for imaging; however, to obtain sharp images, it is recommended to resect the dura mater in rodents too. The user can therefore weigh the degree of invasiveness against the spatial precision required for the guidance.
Another issue that we have commonly observed is that the activated region is larger than the corresponding anatomical structure, possibly because the stimulus may activate adjacent regions that are connected to the stimulated barrel (e.g., neighboring barrels). To mitigate this, the user can apply a milder stimulus, such as a reduced total duration, amplitude, and/or frequency of whisker deflection. Importantly, a hemodynamic response to stimulating a single module can overlap with neighboring modules. To address this issue, we recommend imaging the responses to stimulation of the neighboring modules separately. Then, differential analysis of the different responses can be used to remove the common response and present the spatial contrast, as we demonstrate. Differential analysis eliminates the common response and enhances the visualization of the specific representation of the stimulus/module/barrel of interest. Illuminating at an isosbestic wavelength of 550, 569, or 586 nm measures the total Hb content and, by extension, the CBV. Functional imaging studies indicate that CBV responses co-localize faithfully to sites of increases in neural activity, whereas the patterns of changes in deoxygenated blood are most prominent in draining veins. Given the importance of spatial precision in identifying the pre-defined cortical column, we exclusively use green 530 nm illumination for both surface vasculature reference images and OI-IS. While 530 nm illumination shows a clear pattern of the pial vessels, it also provides the best contrast-to-noise ratio (CNR); in other words, it gives a clear functional image in a short time frame. This feature is important when using OI-IS to guide microinjections and the insertion of electrodes into small functional modules, because the OI-IS stage has to be short.
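Schematically, the differential analysis described above amounts to a pixel-wise subtraction of single-condition response maps (an illustrative NumPy sketch with toy Gaussian responses, not the authors' analysis code):

```python
import numpy as np

def differential_map(map_a, map_b):
    """Differential analysis: subtract the single-condition response map of a
    neighboring module (map_b) from that of the module of interest (map_a).
    Components common to both conditions (global hemodynamics, draining
    vessels) cancel, sharpening the module-specific contrast."""
    return map_a - map_b

# Toy example: two overlapping Gaussian "responses" plus a shared component.
yy, xx = np.mgrid[0:64, 0:64]
resp_a = np.exp(-((xx - 28) ** 2 + (yy - 32) ** 2) / 50.0)  # module of interest
resp_b = np.exp(-((xx - 36) ** 2 + (yy - 32) ** 2) / 50.0)  # neighboring module
common = 0.3 * np.ones((64, 64))                            # shared component
diff = differential_map(resp_a + common, resp_b + common)
# The shared component cancels exactly (diff == resp_a - resp_b): the map is
# positive over the module of interest and negative over its neighbor.
```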
In addition, green illumination reflects changes in CBV, which show spatial specificity to the site of increased neuronal activity at a level comparable to that obtained from the initial dip of the OI-IS response. A critical aspect of OI-IS is the need to maintain appropriate anesthesia, as this may influence both the neuronal and hemodynamic responses. It is critical for the quality of the experiment to avoid anesthetics that interfere with cortical blood flow or with neurovascular coupling. Isoflurane represents a non-optimal choice, as it depresses evoked responses and is a vasodilator at typical regimes. In our mouse experiments, we use a combination of ketamine and xylazine, or dexmedetomidine and isoflurane administered at a low percentage.
OI-IS Is Optimal for Guidance of Insertions Around and Into Fine-Scale Cortical Modules
Optical Imaging of Intrinsic Signals relies solely on intrinsic neurovascular elements and does not require adding an extrinsic indicator of neuronal activity. Thus, it requires no additional injections of viruses for the purpose of imaging, which may damage the cortical module of interest. Membrane-bound dyes such as voltage-sensitive dyes (VSD) used in vivo report voltage changes in neurons at excellent temporal and spatial resolution. The pharmacological and cytotoxic side effects of VSDs have recently been alleviated to near-negligible levels, using newer generations of blue dyes and lower dye concentrations. Compared to in vivo VSD imaging, OI-IS is an indirect indicator of neural activity. Nevertheless, while VSD imaging has undeniable advantages for imaging neuronal membrane potentials, OI-IS provides faster functional mapping in space, because VSDs require 1–2 h to penetrate the cortex and bind to the neurons' membranes. Obtaining the mapping from OI-IS faster than with VSD is important for OI-based guidance of viral microinjections, because it reduces the time under anesthesia in recovery experiments.
Similarly, in acute experiments, OI-IS makes it possible to start the neurophysiological recordings earlier than VSD imaging does, thus reducing the effects of accumulated anesthesia during the recordings. In addition, OI-IS is minimally invasive when performed through a thinned skull, whereas VSD imaging requires craniotomy. Although VSD imaging can be successfully combined with optogenetics, it requires a judicious choice of dyes and opsins and a more advanced photostimulation/imaging setup. We posit that for guiding insertions of microinjection pipettes and/or electrodes into fine-scale functional modules, OI-IS is superior to VSD imaging.
Consideration of Selecting the Viral Vector, Serotype, and Promoter for Applying Optogenetics in Fine-Scale Cortical Modules
For combining OI-IS with optogenetics, it is important to consider the bands of wavelengths used for the OI-IS illumination and for exciting the optogenetic opsin. If these distributions overlap considerably, the OI-IS illumination will excite the optogenetic opsin, which will, in turn, manipulate the neuronal activity. This is especially important if the OI-IS serves as a readout to quantify the effect of the optogenetic manipulation. However, to prevent undesired effects, it is important to consider the distributions of wavelengths also when using OI-IS to guide the insertion of probes into brains that already carry the optogenetic opsin. Here we imaged relative changes in total hemoglobin using a narrow band of wavelengths centered on 530 nm. The optogenetics opsin we used is ChR2, whose maximum sensitivity is at 466 nm. At 530 nm, ChR2's sensitivity drops to 21% of the maximum. Given that the power used for OI-IS illumination is lower than that used for optogenetics, we expect that the effect of the OI-IS illumination on the neuronal activity through excitation of the optogenetics opsin is diminished. In animal models, opsins are commonly introduced into neurons via viral microinjections.
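The wavelength-overlap argument above can be made concrete with a back-of-the-envelope estimate (the 21% relative ChR2 sensitivity at 530 nm is from the text; the power values below are hypothetical placeholders, not measured values):

```python
# Rough estimate of unintended opsin drive by the green OI-IS illumination,
# relative to the intended optogenetic photostimulation.
CHR2_REL_SENSITIVITY_530NM = 0.21  # fraction of peak (466 nm) sensitivity

def relative_opsin_drive(imaging_mw, optogenetics_mw, rel_sensitivity):
    """Opsin drive by the imaging light as a fraction of the drive by the
    optogenetics light, assuming drive ~ power x spectral sensitivity."""
    return imaging_mw * rel_sensitivity / optogenetics_mw

# Hypothetical powers: weak green imaging light vs. a mid-range LED step.
ratio = relative_opsin_drive(0.05, 1.6, CHR2_REL_SENSITIVITY_530NM)
# ratio << 1: the imaging illumination contributes a negligible opsin drive.
```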
Recently, adeno-associated viruses (AAV) have become favored because of their low immunogenicity, good production titer, and expression efficiency, but especially because they can be manipulated under Bio-Safety Level 1 conditions. The user can control the degree of specificity of the optogenetics manipulation through different viral characteristics and microinjection parameters. For example, some viral capsids are taken up into cells faster than others, thus modulating the volume of infection; the promoter can restrict expression to an exclusive type of cells or tissue, which is termed tropism; and the serotype, viral type, and genome can influence the levels of expression, antero- or retrograde transport, and trans-synaptic infection. The viral and microinjection characteristics directly influence the opsin expression, but also the experimental approach. The promoter–serotype combination is the most significant intrinsic factor determining the viral infection spread and pattern, although some variation may occur, especially at extreme titers. For example, chicken β-actin (CBA), its derivative called CAG, and human cytomegalovirus (CMV) are generally considered strong transcription promoters, while CaMKIIα 0.4 constructs specifically infect more than 90% excitatory neurons. The new hybrid vector AAV-DJ combines elements from eight different serotypes to achieve high transduction efficiency. The cortical spread of various recombinant, hybrid AAV serotypes using either the CMV or the CaMKIIα promoter shows an increasing serotype efficacy of 2/1 << 2/7 ∼ 2/8 ∼ 2/9 < 2/5 (in this terminology, the AAV2 inverted terminal repeat has been cross-packaged in the capsid of the second numbered serotype), based on mean expression spread from the injection site.
Other researchers have found serotype 2/8 to spread less than 2/9, but since the two share axonal transport mechanisms, a possible explanation is that the uptake of 2/8 into neurons is faster, possibly due to improved uptake through the plasma membrane, leaving the virus less time to spread. Therefore, if the experiment requires a small, confined area of opsin expression, a good option is to inject AAV2/8 with the CMV promoter, as long as the microinjection sites can be positioned less than 1 mm apart from each other and from the center of the cortical module of interest. If the region of interest (ROI) is widespread, forcing the microinjection sites to be numerous or far away from each other, then 2/5 or 2/9 can be used instead. An empirical comparison of the narrow-spreading expression of AAV2/8-CAG versus the far-spreading AAV2/5-EF1α is shown in the top and bottom panels of the corresponding figure, respectively. Since cell tropism and infection efficiency may vary with the location and type of tissue being targeted, the best practice is to test several viral vectors and compare the resulting opsin expressions empirically.
Pursuing optogenetic microinjections or recording neurophysiology from inside a predetermined fine-scale cortical module requires careful consideration of the experimental parameters, such as the viral serotype and promoter. More importantly, it also requires precisely mapping these modules in vivo. The OI-IS-based guidance methodology described in this manuscript makes it possible to insert micropipettes for viral microinjection and neurophysiology electrodes quickly and accurately into their pre-determined functional module. It allows sub-millimeter spatial resolution and minimal overlap of activated modules. It also features a low degree of invasiveness; thus, it is safe for use in long-duration protocols such as microinjection of optogenetic viral vectors in a recovery surgery, followed by a period of several weeks allowing opsin expression, and then the readout and/or behavioral measurements.
The data supporting the conclusions of this article will be made available upon a request sent by email to the corresponding author. See https://www.mcgill.ca/neuro/amir-shmuel-phd .
This animal study was reviewed and approved by the Animal Care Committee of the Montreal Neurological Institute, McGill University.
VMM designed the study, acquired and analyzed the data, and wrote the manuscript. AS initiated and designed the study, oversaw the data acquisition and analysis, wrote part of the code for the OI-IS data analysis, and wrote the manuscript. Both authors contributed to the article and approved the submitted version.
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Correlation of RANK and RANKL with mammographic density in primary breast cancer patients | 69a56916-adf8-44f4-875f-2f1f6e4b691d | 11258178 | Anatomy[mh] | High mammographic density (MD) has been confirmed to modify breast cancer risk depending on the percentage of MD (PMD) with a two–sixfold increased risk . Besides from familial and genetic factors , higher PMD has been linked with the cumulative exposure to growth factors and hormones. This includes a great lifetime number of menstrual cycles by early menarche and late menopause, which is an indicator for cumulative exposure to luteal phase progesterone levels, a low number of parities and life births, adipose body mass index (BMI), combined estrogen-plus-progestin hormone replacement therapy, elevated levels of prolactin, and other factors . PMD reflects the proportion of dense breast tissue comprising epithelial cells, fibroblasts, and connective tissue on a mammogram, whereas adipose tissue is the main component of non-dense breast tissue. Although it has been proposed that stromal architecture and composition of the breast influence epithelial biology and play an initial role in breast carcinogenesis, the molecular mechanisms between PMD and increased breast cancer risk are still not well understood . The receptor activator of nuclear factor kappa B (RANK) and its ligand (RANKL) as well as osteoprotegerin (OPG), functioning as an antagonistic, soluble decoy receptor for RANKL, are expressed by various tissues and cell lines. Besides its role in bone metabolism and osseous metastasis, RANK/RANKL/OPG signaling is also involved in physiological and pathological processes of immune response and proliferation of different tissues including the mammary gland . It has been demonstrated that progesterone and prolactin increase the expression of RANKL in the breast and interact with the RANK pathway, inducing lobulo-alveolar differentiation, proliferation, and expansion of mammary epithelial cells. 
Inhibition of progesterone, RANK, or RANKL resulted in less mammary cell proliferation, carcinogenesis, and metastasis in mouse models. This has been shown especially in models of BRCA1-mutated breast cancer. The monoclonal antibody against RANKL, denosumab, has proven efficacy in the prevention and treatment of osteoporosis and of bone metastases in breast cancer as well as in other types of cancer. In addition, trials in female BRCA mutation carriers are investigating the effect of denosumab on proliferation of the breast epithelium (BRCA-D, ACTRN12614000694617) and its use as a chemopreventive drug against breast cancer (BRCA-P, NCT04711109). Because of its association with breast proliferation and mammary tumor development, it has been hypothesized that RANK, RANKL, and OPG expression is linked with PMD. This has been investigated in a few studies for serum or plasma expression, or for expression in healthy breast tissue, but not for breast cancer-specific expression so far. The aim of the present study was thus to assess the correlation of RANK and RANKL expression in primary breast cancer samples with the PMD of the contralateral, healthy breast.
Patients
The Bavarian Breast Cancer Cases and Controls (BBCC) study is a case–control study investigating molecular and epidemiological breast cancer risk factors as well as prognostic and predictive factors, including PMD. Between 2000 and 2007, 1538 patients were included who were at least 18 years old and had a diagnosis of invasive breast cancer. Tissue microarrays (TMAs) were constructed from 894 patients. After exclusion of datasets with ineligible characteristics or missing information, the final study population comprised 412 female patients with unilateral invasive breast cancer. The detailed selection process is provided in Fig. .
Histopathological, epidemiological and follow-up data Comprehensive data on tumor and patient characteristics as well as follow-up data for a minimum of 10 years after initial diagnosis were documented conforming to the requirements of the German Cancer Society (Deutsche Krebsgesellschaft) and the German Society for Breast Diseases (Deutsche Gesellschaft für Senologie) as part of the certification process. Breast cancer subtypes were defined as previously described. Briefly, HER2 receptor-negative tumors which showed either estrogen receptor (ER) or progesterone receptor (PR) expression (≥ 10%) were classified as luminal A-like (Ki-67 < 15% and grade 1 or 2) or luminal B-like (Ki-67 ≥ 15% and grade 2 or 3). HER2 receptor-positive breast cancer was assigned to patients with HER2 staining of 3+ as assessed by immunohistochemistry or with HER2 gene amplification. Patients with HER2-negative and hormone receptor (HR)-negative or weakly positive (< 10%) breast cancer were considered triple-negative (TNBC). Assessment of PMD For the assessment of PMD, mammograms were eligible if they were taken within 1 year before or 3 months after breast cancer diagnosis. PMD was measured in cranio-caudal (CC) projection on the contralateral breast, which was not affected by breast cancer. In this work, full-field digital mammograms and film-based mammograms were examined. Analog film-based mammograms were digitized by a CadPro Advantage® film digitizer (VIDAR Systems Corporation, Herndon, Virginia, USA). Breast area measurements and quantitative computer-based threshold density assessments were performed by two independent, experienced readers with special training in the applied method. Mammograms were analyzed in an independent and arbitrary order, and the readers were unaware of any previous findings. Finally, the mean PMD of the two readers was used for analysis.
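The threshold method yields PMD as the ratio of dense-tissue pixels to total breast pixels, with the mean of the two readers' values entering the analysis. A minimal sketch of this idea (the toy image, threshold values, and per-reader settings are illustrative assumptions, not the Madena implementation):

```python
import numpy as np

def percent_density(mammogram, breast_thresh, dense_thresh):
    """Illustrative threshold-based percent mammographic density (PMD).

    Pixels above breast_thresh count as breast tissue; pixels above
    dense_thresh (a stricter cut-off) count as dense tissue.
    Returns PMD as a proportion between 0 and 1.
    """
    breast = mammogram > breast_thresh
    dense = mammogram > dense_thresh  # subset of breast pixels by construction
    return dense.sum() / breast.sum()

# Hypothetical intensity image; 0 = background outside the breast.
img = np.array([[0, 0, 60, 90],
                [0, 50, 80, 95],
                [0, 55, 70, 85]])

# Two readers choose slightly different density thresholds;
# the mean of their PMD values is used, as in the study.
pmd_reader1 = percent_density(img, breast_thresh=10, dense_thresh=65)  # 5/8
pmd_reader2 = percent_density(img, breast_thresh=10, dense_thresh=75)  # 4/8
mean_pmd = (pmd_reader1 + pmd_reader2) / 2
```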
The MD proportion was evaluated using the Madena software program, version 3.26 (Eye Physics, LLC, Los Alamitos, California, USA). This method has been validated and described before, and we used it in several previous works. Assessment of RANK and RANKL expression Tumor specimens were formalin-fixed and paraffin-embedded (FFPE). In a first step, an experienced pathologist marked the tumor areas of interest on a hematoxylin–eosin-stained slide. For the construction of TMAs, cylindrical tissue core biopsies (0.8 mm per dot) from multiple sample donor blocks were re-embedded in a second step into a single microarray block at predefined coordinates. Staining of the TMA was performed with anti-human RANK (N-1H8; Amgen, Thousand Oaks, California, USA) or RANKL (M366; Amgen, Thousand Oaks, California, USA) mouse monoclonal antibodies or isotype-matched control mouse IgG, as previously described. For each primary tumor, RANK and RANKL expression was scored according to the semiquantitative histochemical score (H score). Experienced pathologists conducted the immunohistochemical interpretation blinded to any sample identification. The percentage of RANK- and RANKL-positive tumor cells was multiplied by the respective staining intensity: 0, negative; 1+, weak; 2+, moderate; and 3+, strong. The sum of all tumor cell percentage × intensity products for each TMA dot was defined as the H score, ranging from 0 to 300; a score of 300 corresponds to 100% of tumor cells showing strong staining intensity. Statistical analysis The primary objective of this analysis was to investigate the association between RANK and RANKL tumor expression, quantified as H score, and PMD. For that purpose, we calculated linear regression models with PMD as outcome. The square root of PMD was used to obtain normally distributed residuals of the models.
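The H score described above (percentage of positive tumor cells multiplied by staining intensity, summed over the intensity categories) can be sketched as follows; the staining percentages are hypothetical:

```python
def h_score(pct_by_intensity):
    """Semiquantitative histochemical score.

    pct_by_intensity maps staining intensity (1 = weak, 2 = moderate,
    3 = strong) to the percentage of tumor cells stained at that
    intensity; unstained cells (intensity 0) contribute nothing.
    The score ranges from 0 to 300, where 300 means 100% of tumor
    cells show strong staining.
    """
    score = sum(intensity * pct for intensity, pct in pct_by_intensity.items())
    assert 0 <= score <= 300
    return score

# Hypothetical TMA dot: 20% weak, 10% moderate, 5% strong staining.
h_score({1: 20, 2: 10, 3: 5})  # 1*20 + 2*10 + 3*5 = 55
```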
First, a basic model with the following predictors was set up: age at diagnosis (continuous), BMI (continuous), and parity (number of children born, categorical: 0, 1, 2, and ≥ 3). Afterwards, RANK (> 0 vs 0) and RANKL (> 0 vs 0) were added to the basic model to obtain a full model. Due to the large proportions of zeroes in RANK and RANKL H scores, those variables were dichotomized into negative or positive for further analyses. The basic and the full model were compared using the F test; a significant result indicates that RANK or RANKL H score is associated with PMD. As sensitivity analysis, we calculated Spearman rank correlations (ρ) between PMD and RANK or RANKL H score and tested their significance. Subjects with missing values in RANK or RANKL H score or in PMD were excluded from the analysis. Missing values in other predictors were imputed as described in Salmen et al. The 15 (3.6%) missing values for BMI were substituted by the median of non-missing data. For the imputation of the 25 (6.1%) missing values in parity, we calculated a multinomial logistic regression model with the predictors age, BMI, and PMD. All tests were two-sided, and P < 0.05 was regarded as statistically significant. Calculations were carried out using the R system for statistical computing (version 3.4.0; R Development Core Team, Vienna, Austria, 2017).
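The basic-versus-full model comparison can be illustrated with a self-contained nested F test. The data below are simulated, not the study's, and parity is omitted for brevity (as a categorical predictor it would enter as dummy columns); in practice the P value is read from an F distribution with (p_full − p_basic, n − p_full) degrees of freedom:

```python
import numpy as np

def ols_rss(X, y):
    """OLS fit with intercept; returns (residual sum of squares, n parameters)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return resid @ resid, X1.shape[1]

def nested_f(X_basic, X_full, y):
    """F statistic comparing a basic model with a full model adding predictors."""
    rss_b, p_b = ols_rss(X_basic, y)
    rss_f, p_f = ols_rss(X_full, y)
    n = len(y)
    return ((rss_b - rss_f) / (p_f - p_b)) / (rss_f / (n - p_f))

rng = np.random.default_rng(0)
n = 200
age = rng.normal(58, 12, n)           # age at diagnosis
bmi = rng.normal(25, 4, n)            # body mass index
rank_pos = rng.integers(0, 2, n)      # dichotomized RANK expression (>0 vs 0)
# Square-root-transformed PMD, here unrelated to RANK by construction.
sqrt_pmd = 1.2 - 0.005 * age - 0.01 * bmi + rng.normal(0, 0.1, n)

X_basic = np.column_stack([age, bmi])
X_full = np.column_stack([age, bmi, rank_pos])
f_stat = nested_f(X_basic, X_full, sqrt_pmd)  # refer to F(1, n-4) for a P value
```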
Patient and tumor characteristics Overall, 412 female patients with primary breast cancer were included in the final analysis. Mean age at breast cancer diagnosis was 58.6 (standard deviation, SD 12.7) years, and median BMI was 25.2 (interquartile range, IQR 22.5–28.6) kg/m². A majority of patients gave birth to two children (40.5%), while a minority was nulliparous (13.6%). Most patients had a pathological tumor size of T1 (n = 198, 48.1%) or T2 (n = 166, 40.3%), had no lymph node involvement (n = 219, 53.2%), and had either luminal A-like (n = 133, 32.3%) or luminal B-like (n = 154, 37.4%) tumors (Table ). RANK and RANKL expression and correlation with PMD The median PMD was 0.37 (IQR 0.24–0.53) (Table ). The distribution of PMD is depicted in Fig. .
In the majority of the cases, the H score for immunohistochemical assessment of RANK and RANKL was 0, while 143 patients (34.7%) showed an H score > 0 for RANK and 43 patients (10.4%) for RANKL (Table ). The median H score of cases with a positive expression was 50 (IQR 10–100) for RANK and 30 (IQR 8–125) for RANKL. Concerning molecular subtypes, the frequency of positive RANK expression was lowest among patients with luminal A-like breast cancer (19.5%), increasing in those with luminal B-like breast cancer (26.0%), HER2-positive breast cancer (55.2%), and TNBC (67.2%). The median RANK H score among patients with detectable RANK expression increased in the same order across molecular subtypes. No subtype-specific pattern could be seen for RANKL expression (Table ). The distribution of RANK and RANKL H score is presented in Fig. a, b. The linear regression analysis did not show an association of PMD with RANK and RANKL expression assessed by H score (F test, P = 0.68). Furthermore, sensitivity analysis revealed no significant correlation between PMD and RANK (Spearman’s ρ = 0.01, P = 0.87) or RANKL H score (Spearman’s ρ = 0.04, P = 0.41). Scatterplots for PMD and RANK or RANKL H score are shown in Fig. a, b.
In this retrospectively conducted study of 412 female patients with primary breast cancer, we could not find an association of RANK and RANKL expression, as assessed by immunohistochemistry of FFPE tumor tissue samples, with PMD of the contralateral, non-diseased breast. In a recent observational study, we linked soluble RANKL and OPG expression to breast volume changes during pregnancy in healthy women, implicating an impact on breast proliferation. Likewise, different in vitro and in vivo studies revealed a progesterone- and prolactin-driven induction of the RANK/RANKL/OPG pathway, triggering the development, growth, and migration of mammary epithelial cells, and leading to tumorigenesis and metastasis.
An analysis of a subcohort of prospectively observed, initially healthy, postmenopausal women of the UKCTOCS study who developed breast cancer 12–24 months after sample collection showed that high RANKL and progesterone serum levels were associated with a 5.3-fold increase in breast cancer risk. A few studies also confirmed an inverse relationship of OPG serum levels with breast cancer risk in cohorts of primarily premenopausal patients with a BRCA1/2 mutation (mean age 42 years) as well as in the general population for primarily postmenopausal women (mean age 61 years), while another investigation did not find an association in premenopausal women from the general population (median age 44 years). Data on the association of RANK, RANKL, and OPG expression with PMD are limited. An analysis of 365 cancer-free premenopausal women found RANK serum levels to be positively correlated with PMD; the same association was found for RANKL serum levels when progesterone levels were elevated. A second study of 368 postmenopausal women showed that an increase in RANK plasma gene expression was associated with higher volumetric percent density. Moreover, in patients with very high vs very low PMD, RANKL and, surprisingly, also OPG plasma gene expression were significantly upregulated, while RANKL and OPG plasma gene expression were not higher in women with heterogeneously dense breasts compared with those with almost entirely fatty breasts. Another report on 43 postmenopausal women confirmed higher mean PMD for those with lower serum OPG levels, while no association was identified in 57 premenopausal women. In summary, the few available studies on healthy individuals suggest that elevated RANK or RANKL circulating protein levels or plasma gene expression are associated with increased PMD, while data concerning the effect of OPG expression on PMD are inconsistent. Data on the breast tissue expression of RANK, RANKL, and OPG are even rarer.
One report demonstrated that in 48 healthy, premenopausal women, increasing RANKL gene expression in non-diseased FFPE breast tissue was associated with greater PMD. In this context, it has to be noted that our study is the first to investigate the expression of RANK and RANKL in breast cancer tissue with regard to PMD of the contralateral, healthy breast. Generally, it has been shown that tissue expression of RANK and RANKL is increased in healthy breast tissue compared with breast cancer tissue, that tissue expression of RANKL varies with changing levels of sex hormones during the menstrual cycle, and that it is higher in premenopausal than in postmenopausal women. With a mean age of 58.6 years, our study cohort represents primarily postmenopausal women, which could contribute to the relatively low tumor expression of RANK and RANKL. Immunohistochemical staining of TMAs in the current study was performed with the same antibodies as reported in previous trials. We detected positive tumor expression of RANK and RANKL in 34.7% and 10.4% of the patients, respectively, and most of these had low expression. In line with these results, a study on TMAs of 601 breast cancer patients found positive expression of RANK in 27% and of RANKL in 6%, and another large analysis of TMAs of 2299 breast cancer patients from four independent cohorts (of these, 777 patients with ER-negative disease) showed even lower expression of RANK and RANKL in the tumor compartment. In a trial exclusively on TNBC patients, expression rates similar to those in our study were identified. Some other breast cancer studies reported higher expression, partly with greater rates for RANK than for RANKL. The differences could be explained by varying specificity of immunohistochemical reagents or methodologies, different scoring systems, and differences in patient cohorts, histological subtypes, and clinical stages.
In the current study, tumor expression of RANK and RANKL was quantified as H scores. Since expression was low, with 65.3% negative cases for RANK and 89.6% negative cases for RANKL, we performed a dichotomization into negative or positive expression. This cut-off for the RANK H score differs from a previously used cut-off of ≥ 8.5, which was identified as optimal for the prediction of pathological complete response and survival in a group of patients who all underwent neoadjuvant chemotherapy. In contrast to that study, we investigated the association of RANK and RANKL expression with PMD in breast cancer patients of whom 55.1% received neoadjuvant or adjuvant chemotherapy and whose breast cancers had more favorable tumor characteristics. In our study, triple-negative and HER2-positive tumors had a greater proportion of RANK-positive cases and stronger RANK expression compared with luminal B-like and luminal A-like tumors, while no subtype-specific expression of RANKL was detected. In line with this finding, other studies correlated tumor expression of RANK predominantly with worse prognostic molecular parameters such as ER-negative, HR-negative, triple-negative, or basal-like breast cancer, higher grading, and higher Ki-67. Tumor expression of RANKL was associated with HR-positive, luminal A-like, or non-basal-like breast cancer, lower grading, and lower Ki-67 in some studies. One of the strengths of our work is the inclusion of all women with incident breast cancer from routine clinical work regardless of any other criteria, reducing the risk of bias in the selection of patients and of treatment effects on PMD. Patients were recruited from a tertiary referral center in a university hospital and not from a population-based screening facility, which generally detects earlier tumor stages.
The semiautomated quantification of MD, with two experienced, independent readers for all images and the mean PMD value being used, has been validated as a robust method in several previous studies. A limitation is the retrospective nature of the analysis, with the potential for missing data. Several cases had to be excluded because of incomplete values in variables of interest such as PMD or RANK and RANKL H scores, or due to technical issues of TMA evaluation (e.g., inadequate tumor tissue recognizable or tumor core washed off). Although a limited number of studies has described an association of RANK, RANKL, and OPG expression in serum, plasma, or healthy breast tissue with PMD, our study does not show a correlation between tumor-specific RANK and RANKL expression and PMD in patients with primary breast cancer. Since RANK/RANKL/OPG signaling appears to play a role in the development of breast cancer and since RANKL inhibition may be a novel chemoprevention strategy in women at increased breast cancer risk, this pathway will remain under investigation in present and future trials.
The Impact of Comment Slant and Comment Tone on Digital Health Communication Among Polarized Publics: A Web-Based Survey Experiment | 52c7f64c-93a9-42fb-b283-c7a6d9aeac07 | 11607566 | Health Communication[mh] | Background Escalating political polarization is increasingly reflected in public attitudes toward health issues, despite health traditionally being a nonpartisan, science-based sector. This polarization is particularly noticeable in social media discussions, where health-related posts often elicit a spectrum of public responses in the comments section, ranging from support to opposition. Alarmingly, these comments frequently include incivility, such as profanity, name-calling, or shouting. In particular, social media has become a crucial tool for health communication, allowing health institutions to initiate campaigns and individual users to disseminate these campaign messages. The prevalence of polarization and incivility in the comments accompanying the ubiquitous health campaigns on social media necessitates an understanding of their impact on health-related compliance behavior. This understanding is crucial for guiding public health promotion and enhancing the effectiveness of digital health communication. Previous research has indicated that individuals’ exposure to opposing or uncivil comments on health promotion posts can independently reduce their compliance with the promoted health behaviors. However, few studies have examined how the 2 attributes of comments interact and exert joint effects. Indeed, incivility might reduce the effects of comment slant, as individuals may attribute low credibility to the commenters and thereby be less affected by them. The combined influence of comment slant and tone on health-related compliance behavior warrants examination, as these 2 attributes of comments often occur together.
The findings contribute to a nuanced understanding of how social media users’ interaction and active participation, specifically polarized and hostile online discourse, affect digital health practices. Health campaigns typically influence compliance behavior indirectly, rather than directly. The influence of presumed influence (IPI) model provides a theoretical explanation for this indirect effect, suggesting that individuals’ perceived media influence on others drives their compliance with promoted behaviors . Even on social media, where cues (eg, view counts and comments) directly trigger normative influence to affect individuals , the IPI process remains significant for understanding their behavior . Studies primarily suggest that individuals’ perceptions of a message’s influence on others are based on their assumptions about others’ exposure to that message . This other-consciousness perspective highlights the effects of comments as they represent others’ responses to a post. Therefore, it is essential to explore how user comments with varying slants and tones can cultivate individuals’ perceptions of the campaign message’s influence on others and their compliance with health behaviors. In a polarized environment, individuals often have strong prior attitudes. These polarized attitudes can also influence how they perceive the media’s impact on others, an approach known as the self-centered perspective of the IPI process . While studies have started to delve into the effects of the self-centered perspective on the IPI process, whether and how this perspective introduces changes in the current predominant other-consciousness perspective remains unexplored. Notably, individuals may react differently to incivility in comments, showing more tolerance for comments that align with their opinions . 
It implies that an individual’s polarized attitude can influence not only their perceived influence of the digital health campaign on others but also the effects of comments on their presumed influence of the campaign. By considering the separate and combined effects of the other-consciousness and self-centered perspectives, the dual nature of the IPI model offers a comprehensive understanding of the social psychological process through which people respond to mediated health communication in a polarized and hostile online environment. This study was conducted in September 2020, during the early stage of the COVID-19 pandemic in the United States, when wearing masks to combat COVID-19 was controversial due to the polarized political environment . Although numerous posts on social media advocated mask-wearing as an effective measure against the virus’s spread, comments on these posts were predominantly polarized , and about 1 in 5 of these comments exhibited incivility . Such opinion climates have been related to the public’s noncompliance with COVID-19 mitigation guidelines , which leads to heightened virus transmission and COVID-19–related deaths. This situation provides an appropriate context to examine the impact of digital health promotion in the polarized and hostile digital space. Given that, we conducted a between-subjects experiment with a 2 (comment slant: pro–mask-wearing vs anti–mask-wearing) × 2 (comment tone: civil vs uncivil) design by manipulating comments accompanying a social media post for mask-wearing. Participants’ prior attitude was included as a moderator. Given the proliferation of digital health campaigns and the increasing polarized and hostile opinion climates, public health practitioners can benefit from the findings to boost the effectiveness of digital health communication. Presumed Influence and Health Campaigns The IPI model comprises 3 components: presumed exposure, presumed influence, and the IPI . 
Presumed exposure refers to individuals’ exposure to media content serving as a foundation for their inference about others’ exposure to the same content. Presumed influence indicates that, in turn, individuals’ presumption of others’ exposure triggers their presumption that the media content will influence those others. Finally, IPI refers to individuals’ alignment of their reactions to the presumed influence on others. To date, discussions on IPI tend to focus on how people accommodate or rectify media messages. Rectifying behavior refers to individuals taking actions to protect others from harmful media effects or to magnify desirable media effects on others . Accommodation reactions are more widely studied in the context of health campaigns, where individuals adapt themselves to the social environment . To assess group or social norms, people often form perceptions about media influence on others and draw conclusions based on these perceived influences . The more individuals believe that others adopt a particular behavior, the more likely they are to think that the behavior is normative. A desire to fit in with the group or social pressure then motivates them to adopt the same behavior . The IPI model has been extensively tested in the context of health communication. The compliance behavior has been examined in the context of condom use, healthy diet, regular exercise, antismoking, excessive drinking, e-cigarette use, and COVID-19 pandemic protective behavior . Other-Consciousness Perspective: Comments and Presumed Influence Overview Social media has served as an integral arena for organizations and individuals to share health information . The commentary feature provided by social media transforms audiences from passive information receivers to active users who interact with these health messages. Comments on a public health campaign message often reflect commenters’ support or opposition to the message, indicating their slants. 
Comments can express approval of the campaign by presenting supportive views or can be disapproving by presenting challenging views . The slants of comments accompanying a message are likely to affect people’s presumptions of the message’s influence on others . This effect can be explained by the exemplification theory. According to the theory, exemplars refer to the opinions or experiences of a person involved in an issue . Exemplars are concrete and easy to process and remember. Thus, people tend to form judgments and beliefs about an issue based on available exemplars. Comments below a message serve as vivid exemplars of the audience’s opinions on the message. When gauging the influence of a social media message on others, a person may perceive comments below the message as representations of the entire audience’s reaction to the message . Previous studies have found that the slant of comments accompanying social media health campaigns affects individuals’ perceptions of the campaigns’ influence on others. When individuals were exposed to supportive comments below a Facebook post promoting COVID-19 vaccination, they perceived a greater influence of the post on others’ acceptance of COVID-19 vaccination than when exposed to disapproving comments about the post . Similarly, when social media users encounter pro–mask-wearing comments rather than anti–mask-wearing comments below a mask-promoting post, they are likely to perceive more influence of the post on other users’ acceptance of mask-wearing. The perceived influence of such a media message on others may further lead the users to comply with the behavior promoted by the message . Accordingly, the following hypotheses are proposed: Hypothesis 1a: social media users will have weaker intentions to wear masks when exposed to anti–mask-wearing comments below a mask-promoting post than when exposed to pro–mask-wearing comments. 
Hypothesis 1b: the association between comment slants and intentions to wear masks will be mediated by social media users’ perception of the influence of the mask-promoting post on others. Although the commentary feature facilitates social media users’ expressions of personal opinions, comments are often loaded with incivility. Comments are considered to contain incivility when expressed in an impolite and disrespectful tone . Uncivil comments associated with a message can induce a “nasty effect,” a belief that if comments below a message contain incivility, the message must be bad . Readers tend to believe that the original post, juxtaposed with the uncivil comments, is biased, of low quality, uncivil, and from a noncredible source . Research on the “nasty effect” has also extended the spillover effects of comments’ incivility to audiences’ perception of a media message’s influence on others. Waddell and Bailey found a belief in audiences’ minds that “if others’ comments are uncivil then they must not have been affected by the content.” Uncivil comments reveal conflicts among people with different opinions on an issue, rather than their elaboration and information processing of the issue discussed in the main message. When exposed to uncivil comments rather than civil ones left on a media message, people tend to believe that others reinforce their prior views rather than reading, deliberating, and being influenced by the adjacent media message. Accordingly, social media users exposed to uncivil comments on a mask-promoting post are expected to presume that the post exerts less influence on others’ acceptance of mask-wearing than when exposed to civil comments. The perception of less influence of the post on others, in turn, reduces social media users’ behavioral intention to wear masks. We thus propose the following hypotheses. 
Hypothesis 2a: social media users will have weaker intentions to wear masks when exposed to uncivil comments below a mask-promoting post than when exposed to civil comments.

Hypothesis 2b: the association between comment tone and intentions to wear masks will be mediated by social media users' perception of the influence of the mask-promoting post on others.

Self-Centered Perspective: Polarized Attitudes and Presumed Influence

Individuals' perceptions of a health campaign's influence on others may also be affected by their prior attitudes toward the campaign's advocacy. This effect can be explained by the "looking-glass perception," which suggests that people's social perceptions are often self-centric: people tend to use their own opinions to estimate those of others. Believing that situational factors are similar for themselves and others, they tend to project their prior attitudes onto their perceived social consensus on related issues.

Previous studies have supported the idea that presumed influence may be self-centric. For example, the seemingly robust causal chain from self-exposure through presumed exposure to presumed influence was found to depend on the order of questions: when the order (self-variable → other variable → presumed influence on others → behavior) was reversed (other variable → self-variable → presumed influence on the self → behavior), the resulting causal chain conflicted with the IPI process. This finding suggests that the self may serve as an anchor for projecting presumed influence onto others. Another study found that the more individuals relate themselves to a message and consider it real, the greater the influence they perceive the message to exert on its audience. Extrapolating from this self-centered perspective, individuals' prior attitudes toward a health campaign's advocacy may predict their estimation of the campaign's influence on others.
People tend to accept information that is consistent with their prior beliefs. When individuals encounter a health message consistent with their prior attitudes, they are more willing to acknowledge that the message influences them and to accept its view; when their attitudes conflict with the message, they are more likely to reject it. Accordingly, individuals with favorable attitudes toward mask-wearing are likely to perceive that others, like themselves, agree with the mask-promoting message and will be influenced by it, whereas individuals with unfavorable attitudes are likely to believe that others, like themselves, reject the message and are immune to it. The perception that the mask-promoting post has affected others, in turn, shapes individuals' behavioral intention to wear masks. In addition to this partial mediating role of presumed influence, the positive association between attitudes and behavioral intentions is well established: attitudes toward a health behavior can inspire individuals' intention to perform it. Thus, the following 2 hypotheses are proposed:

Hypothesis 3a: social media users will have weaker intentions to wear masks when they have unfavorable attitudes toward mask-wearing than when they have favorable attitudes.

Hypothesis 3b: the association between prior attitudes toward mask-wearing and intentions to wear masks will be partially mediated by social media users' perception of the influence of the mask-promoting post on others.

The Interaction of Social Media Comments and Polarized Attitudes

The slant and tone of social media comments below a mask-promoting message are likely to interact in affecting social media users' presumption of the message's influence and, subsequently, their health compliance. The content of comments provides important cues that help users form impressions of the commenters.
Previous studies reveal that encountering uncivil comments under a news article leads to negative perceptions of the commenters and lowers their perceived credibility. Because source credibility has long been recognized as a key factor in persuasiveness, this lack of commenter credibility may cause uncivil comments to signal that the message exerts little influence on others; the reduced presumed influence, in turn, is less likely to drive behavioral change. In other words, exposure to civil pro–mask-wearing comments on a mask-promoting post facilitates users' perception that the post influences others' acceptance of mask-wearing and stimulates their own compliance, whereas exposure to uncivil pro–mask-wearing comments is likely to decrease both this presumed influence and their intention to comply. Likewise, exposure to civil anti–mask-wearing comments can reduce users' presumed influence of the post on others' acceptance of mask-wearing and their compliance with mask-wearing, whereas uncivil anti–mask-wearing comments can partly offset these negative effects by preserving users' presumed influence and behavioral intention. The following 2 hypotheses are therefore proposed:

Hypothesis 4a: comment tone will moderate the effect of comment slant on social media users' intentions to wear masks, such that the effect of comment slant on behavioral intention will be stronger when comments are expressed in a civil manner than in an uncivil manner.

Hypothesis 4b: the interaction effect of comment slant and comment tone on social media users' intentions to wear masks will be mediated by their perception of the influence of the mask-promoting post on others.

An interaction among comment slant, comment tone, and prior attitudes is also likely.
Individuals may be more tolerant of comments that align with their prior attitudes, overlooking the incivility and aggressiveness in them. This can be explained by social identity theory, which posits that individuals categorize themselves and others into in-groups and out-groups based on shared characteristics or beliefs, and that this categorization shapes their attitudes and behaviors. Individuals may thus categorize comments into in-group comments (those that align with their prior attitudes) and out-group comments (those that contradict them). They are more likely to favor in-group comments and perceive them in a positive light (ie, as less uncivil) than out-group comments, because in-group comments reinforce their social identity and validate their prior attitudes. Experimental studies have suggested that individuals rate a comment supporting their prior attitudes as civil even when it contains incivility, yet still recognize the incivility in comments opposing their prior attitudes. Thus, comment tone may function, or produce a relatively greater effect on presumed influence and health-related compliance behavior, only when users' prior attitudes are inconsistent with comment slant. When their prior attitudes are consistent with comment slant, users may ignore the incivility and perceive the comments as civil, so the impact of comment tone on presumed influence and behavioral intention would be discounted or become nonsignificant.
We propose the 2 hypotheses below:

Hypothesis 5a: there is an interaction among comment slant, comment tone, and prior attitudes on social media users' intentions to wear masks, such that incivility will alter the effect on behavioral intention of comments whose slant is inconsistent with users' prior attitudes, but not of comments whose slant is consistent with their preexisting attitudes.

Hypothesis 5b: the impact of the interaction of comment slant, comment tone, and prior attitudes on social media users' intentions to wear masks will be mediated by their perception of the influence of the mask-promoting post on others.

In summary, this study aims to investigate how social media users' polarized attitudes toward mask-wearing and their exposure to a mask-promoting post accompanied by user comments, independently or jointly, affect their compliance with mask-wearing.

Escalating political polarization is increasingly reflected in public attitudes toward health issues, even though health has traditionally been a nonpartisan, science-based sector. This polarization is particularly noticeable in social media discussions, where health-related posts often elicit a spectrum of public responses in the comments section, ranging from support to opposition. Alarmingly, these comments frequently include incivility, such as profanity, name-calling, or shouting. At the same time, social media has become a crucial tool for health communication, allowing health institutions to initiate campaigns and individual users to disseminate campaign messages. The prevalence of polarization and incivility in the comments accompanying ubiquitous health campaigns on social media necessitates an understanding of their impact on health-related compliance behavior. This understanding is crucial for guiding public health promotion and enhancing the effectiveness of digital health communication.
Previous research has indicated that individuals' exposure to opposing or uncivil comments on health promotion posts can independently reduce their compliance with the promoted health behaviors. However, few studies have examined how these 2 attributes of comments interact and exert joint effects. Indeed, incivility might dampen the effects of comment slant because individuals may attribute low credibility to the commenters and thereby be less affected by them. The combined influence of comment slant and tone on health-related compliance behavior warrants examination, as the 2 attributes often occur together. The findings contribute to a nuanced understanding of how social media users' interaction and active participation, specifically polarized and hostile online discourse, affect digital health practices.

Health campaigns typically influence compliance behavior indirectly rather than directly. The influence of presumed influence (IPI) model provides a theoretical explanation for this indirect effect, suggesting that individuals' perceived media influence on others drives their compliance with promoted behaviors. Even on social media, where cues (eg, view counts and comments) directly trigger normative influence on individuals, the IPI process remains significant for understanding their behavior. Studies primarily suggest that individuals' perceptions of a message's influence on others are based on their assumptions about others' exposure to that message. This other-consciousness perspective highlights the effects of comments because they represent others' responses to a post. It is therefore essential to explore how user comments with varying slants and tones cultivate individuals' perceptions of a campaign message's influence on others and, ultimately, their compliance with health behaviors.

In a polarized environment, individuals often have strong prior attitudes.
These polarized attitudes can also influence how they perceive the media's impact on others, an approach known as the self-centered perspective of the IPI process. Although studies have begun to examine the self-centered perspective, whether and how it alters the currently predominant other-consciousness perspective remains unexplored. Notably, individuals may react differently to incivility in comments, showing more tolerance for comments that align with their own opinions. This implies that an individual's polarized attitude can shape not only their perceived influence of a digital health campaign on others but also the effects that comments have on that presumed influence. By considering the separate and combined effects of the other-consciousness and self-centered perspectives, the dual nature of the IPI model offers a comprehensive account of the social psychological process through which people respond to mediated health communication in a polarized and hostile online environment.

This study was conducted in September 2020, during the early stage of the COVID-19 pandemic in the United States, when wearing masks to combat COVID-19 was controversial because of the polarized political environment. Although numerous social media posts advocated mask-wearing as an effective measure against the virus's spread, comments on these posts were predominantly polarized, and about 1 in 5 exhibited incivility. Such opinion climates have been linked to public noncompliance with COVID-19 mitigation guidelines, which heightens virus transmission and COVID-19–related deaths. This situation provides an appropriate context for examining the impact of digital health promotion in a polarized and hostile digital space.
Against this backdrop, we conducted a between-subjects experiment with a 2 (comment slant: pro–mask-wearing vs anti–mask-wearing) × 2 (comment tone: civil vs uncivil) design by manipulating the comments accompanying a social media post promoting mask-wearing. Participants' prior attitude was included as a moderator. Given the proliferation of digital health campaigns and increasingly polarized and hostile opinion climates, public health practitioners can draw on the findings to boost the effectiveness of digital health communication.

The IPI model comprises 3 components: presumed exposure, presumed influence, and the IPI. Presumed exposure refers to individuals' own exposure to media content serving as a foundation for their inference about others' exposure to the same content. Presumed influence indicates that, in turn, individuals' presumption of others' exposure triggers their presumption that the media content will influence those others. Finally, the IPI refers to individuals aligning their own reactions with the presumed influence on others. To date, discussions of the IPI have tended to focus on how people accommodate or rectify media messages. Rectifying behavior refers to individuals taking action to protect others from harmful media effects or to magnify desirable media effects on others. Accommodation reactions are more widely studied in the context of health campaigns, where individuals adapt themselves to the social environment. To assess group or social norms, people often form perceptions about media influence on others and draw conclusions from these perceived influences: the more individuals believe that others adopt a particular behavior, the more likely they are to consider that behavior normative. A desire to fit in with the group, or social pressure, then motivates them to adopt the same behavior. The IPI model has been extensively tested in the context of health communication.
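The 2 × 2 design and the IPI chain together imply a moderated mediation structure: manipulated comment slant and tone shape presumed influence, which in turn drives mask-wearing intention, with prior attitudes as a measured moderator. A minimal sketch of how such a model could be estimated with ordinary least squares on simulated data (all coefficients, coding, and the sample size are hypothetical illustrations, not the study's data or analysis procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Two manipulated factors (effect-coded 2 x 2 cells) and a measured moderator.
slant = rng.choice([-1.0, 1.0], n)     # -1 = anti-mask comments, +1 = pro-mask
tone = rng.choice([-1.0, 1.0], n)      # -1 = uncivil, +1 = civil
attitude = rng.normal(0.0, 1.0, n)     # prior attitude toward mask-wearing

# Hypothetical data-generating process: the mediator (presumed influence)
# responds to slant, tone, attitude, and the slant x tone interaction;
# intention responds to the mediator plus a direct attitude effect.
presumed = (0.5 * slant + 0.3 * tone + 0.4 * attitude
            + 0.2 * slant * tone + rng.normal(0.0, 1.0, n))
intention = 0.6 * presumed + 0.3 * attitude + rng.normal(0.0, 1.0, n)

def ols(y, *cols):
    """OLS with an intercept; returns the slope coefficients only."""
    X = np.column_stack([np.ones(len(y)), *cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

# a-path: effects of the factors (and their interaction) on the mediator.
a = ols(presumed, slant, tone, attitude, slant * tone)
# b-path: effect of the mediator on intention, controlling for the factors.
b = ols(intention, presumed, slant, tone, attitude)

# Indirect (mediated) effect of comment slant on intention, as in H1b;
# recovers a value near the simulated 0.5 * 0.6 = 0.30.
indirect_slant = a[0] * b[0]
print(round(float(indirect_slant), 2))
```

In practice, such indirect effects are usually reported with bootstrap confidence intervals rather than point estimates alone, and the three-way pattern in H5a/H5b would add attitude × slant and attitude × slant × tone terms to the a-path regression.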
Compliance behavior has been examined in the contexts of condom use, healthy diet, regular exercise, antismoking, excessive drinking, e-cigarette use, and COVID-19 pandemic protective behavior.

Overview

Social media has served as an integral arena for organizations and individuals to share health information. The commentary feature provided by social media transforms audiences from passive information receivers into active users who interact with these health messages. Comments on a public health campaign message often reflect commenters' support of or opposition to the message, indicating their slant.
Similarly, when social media users encounter pro–mask-wearing comments rather than anti–mask-wearing comments below a mask-promoting post, they are likely to perceive more influence of the post on other users’ acceptance of mask-wearing. The perceived influence of such a media message on others may further lead the users to comply with the behavior promoted by the message . Accordingly, the following hypotheses are proposed: Hypothesis 1a: social media users will have weaker intentions to wear masks when exposed to anti–mask-wearing comments below a mask-promoting post than when exposed to pro–mask-wearing comments. Hypothesis 1b: the association between comment slants and intentions to wear masks will be mediated by social media users’ perception of the influence of the mask-promoting post on others. Although the commentary feature facilitates social media users’ expressions of personal opinions, comments are often loaded with incivility. Comments are considered to contain incivility when expressed in an impolite and disrespectful tone . Uncivil comments associated with a message can induce a “nasty effect,” a belief that if comments below a message contain incivility, the message must be bad . Readers tend to believe that the original post, juxtaposed with the uncivil comments, is biased, of low quality, uncivil, and from a noncredible source . Research on the “nasty effect” has also extended the spillover effects of comments’ incivility to audiences’ perception of a media message’s influence on others. Waddell and Bailey found a belief in audiences’ minds that “if others’ comments are uncivil then they must not have been affected by the content.” Uncivil comments reveal conflicts among people with different opinions on an issue, rather than their elaboration and information processing of the issue discussed in the main message. 
When exposed to uncivil comments rather than civil ones left on a media message, people tend to believe that others reinforce their prior views rather than reading, deliberating, and being influenced by the adjacent media message. Accordingly, social media users exposed to uncivil comments on a mask-promoting post are expected to presume that the post exerts less influence on others’ acceptance of mask-wearing than when exposed to civil comments. The perception of less influence of the post on others, in turn, reduces social media users’ behavioral intention to wear masks. We thus propose the following hypotheses. Hypothesis 2a: social media users will have weaker intentions to wear masks when exposed to uncivil comments below a mask-promoting post than when exposed to civil comments. Hypothesis 2b: the association between comment tone and intentions to wear masks will be mediated by social media users’ perception of the influence of the mask-promoting post on others. Self-Centered Perspective: Polarized Attitudes and Presumed Influence Individuals’ perceptions of a health campaign’s influence on others may be affected by their prior attitudes toward the campaign’s advocacy. This effect can be explained by the “looking-glass perception,” which suggests that people’s social perceptions are often self-centric, and people tend to use their own opinions to estimate those of others . They believe that situational factors are similar between themselves and others. Therefore, they tend to amplify their prior attitudes to their perceived social consensus on related issues . Previous studies have provided support for the idea that presumed influence may be self-centric. For example, the robust causal chain from self-exposure, presumed exposure, to presumed influence was found to result from the order of questions. 
When the order of questions (self-variable → other variable → presumed influence on others → behavior) was reversed (other variable → self-variable → presumed influence on the self → behavior), the causal chain conflicted with the IPI process . The finding suggests that the self may serve as an anchor for projecting presumed influence on others. Another study found that the more individuals relate themselves to the message and consider it real, the greater they perceive the message to elicit an influence on its audiences . Extrapolating from this self-centered perspective, individuals’ prior attitudes toward a health campaign’s advocacy may predict their estimation of the campaign’s influence on others. People tend to accept information that is consistent with their prior beliefs . When individuals encounter a health message consistent with their prior attitudes, they are more willing to acknowledge that they are influenced by the message and accept its view. In contrast, individuals are more likely to reject the message when they have inconsistent attitudes toward it . Accordingly, individuals with favorable attitudes toward mask-wearing are likely to perceive that others, like themselves, also agree with the mask-promoting message and will be influenced by it. Individuals with unfavorable attitudes toward mask-wearing are likely to believe that others, similar to themselves, reject the message and are immune to it. The perception that the mask-promoting post has affected others, in turn, influences individuals’ behavioral intention to wear masks. In addition to the partial mediating role of presumed influence, the positive association between attitudes and behavioral intentions has been sufficiently addressed . Attitudes toward a health behavior can inspire individuals’ intention to perform the behavior. 
Thus, the following 2 hypotheses are proposed: Hypothesis 3a: social media users will have weaker intentions to wear masks when they have unfavorable attitudes toward mask-wearing than when they have favorable attitudes. Hypothesis 3b: the association between prior attitudes toward mask-wearing and intentions to wear masks will be partially mediated by social media users’ perception of the influence of the mask-promoting post on others. The Interaction of Social Media Comments and Polarized Attitudes The slant and tone of social media comments below a mask-promoting message are likely to interact and affect social media users’ presumption of the message’s influence and subsequently their health compliance. The content of the message provides important cues that help users form impressions of the senders. Previous studies reveal that encountering uncivil comments under a news article led to negative perceptions and less perceived credibility of the commenters . While source credibility has long been recognized as a key factor in persuasiveness , a lack of credibility among commenters may cause uncivil comments to signal that the message has less influence on others. The reduced presumed influence, in turn, is less likely to drive behavioral change. In other words, social media users’ exposure to civil pro–mask-wearing comments on a mask-wearing post facilitates their perception that the post poses an influence on others’ acceptance of mask-wearing and stimulates their compliance with mask-wearing. In contrast, exposure to uncivil pro–mask-wearing comments on a post is likely to decrease this presumed influence of the post and their intentions to comply with mask-wearing. In addition, social media users’ exposure to civil anti–mask-wearing comments on a mask-wearing post can reduce their presumed influence of the post on others’ acceptance of mask-wearing and their compliance with mask-wearing. 
Conversely, uncivil anti–mask-wearing comments can offset these negative effects to some extent by maintaining the social media users’ presumed influence and behavioral intention of mask-wearing. The following 2 hypotheses are therefore proposed: Hypothesis 4a: comment tone will moderate the effect of comment slant on social media users’ intentions to wear masks, such that the effect of comment slant on behavioral intention will be stronger when comments are expressed in a civil manner compared to in an uncivil manner. Hypothesis 4b: the interaction effect of comment slant and comment tone on social media users’ intentions to wear masks will be mediated by their perception of the influence of the mask-promoting post on others. It is also likely that there is an interaction between comment slant, comment tone, and prior attitudes. Individuals may have more tolerance for comments that align with their prior attitudes, and they may overlook the incivility and aggressiveness in the comments . This can be explained by the Social Identity Theory , which posits that individuals categorize themselves and others into in groups and out groups based on shared characteristics or beliefs, and this categorization influences their attitudes and behaviors. Individuals may categorize comments into in-group comments (those that align with their prior attitudes) and out-group comments (those that contradict their prior attitudes). They are more likely to favor in-group comments and perceive them in a positive light (ie, less uncivil) than out-group comments, as these comments reinforce their social identity and validate their prior attitudes. Experimental studies have suggested that individuals would rate a comment that supported their prior attitudes as civil, even though it contained incivility. However, they still recognized the incivility in comments that were against their prior attitudes . 
Thus, comment tone may only function or produce a relatively greater effect on presumed influence and health-related compliance behavior when social media users’ prior attitudes are inconsistent with comment slant. In contrast, when social media users’ prior attitudes are consistent with comment slant, they may ignore the incivility in these comments but perceive it as civil. Therefore, the impact of comment tone on presumed influence and behavioral intention would be discounted or become nonsignificant. We propose the 2 hypotheses below: Hypothesis 5a: there is an interaction among comment slant, comment tone, and prior attitudes on social media users’ intentions to wear masks, such that the influence of incivility will affect the influence of comments that reveal a slant inconsistent with social media user’ prior attitudes on their behavioral intention to wear masks, but it will not affect the influence of comments that reveal a slant consistent with their preexisting attitudes. Hypothesis 5b: the impact of the interaction of comment slant, comment tone, and prior attitudes on social media users’ intentions to wear masks will be mediated by their perception of the influence of the mask-promoting post on others. In summary, this study aims to investigate how social media users’ polarized attitudes toward mask-wearing and their exposure to a mask-promoting post synchronized with user comments, independently or collectively, affect their compliance with mask-wearing. Social media has served as an integral arena for organizations and individuals to share health information . The commentary feature provided by social media transforms audiences from passive information receivers to active users who interact with these health messages. Comments on a public health campaign message often reflect commenters’ support or opposition to the message, indicating their slants. 
Comments can express approval of the campaign by presenting supportive views or can be disapproving by presenting challenging views . The slants of comments accompanying a message are likely to affect people’s presumptions of the message’s influence on others . This effect can be explained by the exemplification theory. According to the theory, exemplars refer to the opinions or experiences of a person involved in an issue . Exemplars are concrete and easy to process and remember. Thus, people tend to form judgments and beliefs about an issue based on available exemplars. Comments below a message serve as vivid exemplars of the audience’s opinions on the message. When gauging the influence of a social media message on others, a person may perceive comments below the message as representations of the entire audience’s reaction to the message . Previous studies have found that the slant of comments accompanying social media health campaigns affects individuals’ perceptions of the campaigns’ influence on others. When individuals were exposed to supportive comments below a Facebook post promoting COVID-19 vaccination, they perceived a greater influence of the post on others’ acceptance of COVID-19 vaccination than when exposed to disapproving comments about the post . Similarly, when social media users encounter pro–mask-wearing comments rather than anti–mask-wearing comments below a mask-promoting post, they are likely to perceive more influence of the post on other users’ acceptance of mask-wearing. The perceived influence of such a media message on others may further lead the users to comply with the behavior promoted by the message . Accordingly, the following hypotheses are proposed: Hypothesis 1a: social media users will have weaker intentions to wear masks when exposed to anti–mask-wearing comments below a mask-promoting post than when exposed to pro–mask-wearing comments. 
Hypothesis 1b: the association between comment slants and intentions to wear masks will be mediated by social media users’ perception of the influence of the mask-promoting post on others. Although the commentary feature facilitates social media users’ expressions of personal opinions, comments are often loaded with incivility. Comments are considered to contain incivility when expressed in an impolite and disrespectful tone . Uncivil comments associated with a message can induce a “nasty effect,” a belief that if comments below a message contain incivility, the message must be bad . Readers tend to believe that the original post, juxtaposed with the uncivil comments, is biased, of low quality, uncivil, and from a noncredible source . Research on the “nasty effect” has also extended the spillover effects of comments’ incivility to audiences’ perception of a media message’s influence on others. Waddell and Bailey found a belief in audiences’ minds that “if others’ comments are uncivil then they must not have been affected by the content.” Uncivil comments reveal conflicts among people with different opinions on an issue, rather than their elaboration and information processing of the issue discussed in the main message. When exposed to uncivil comments rather than civil ones left on a media message, people tend to believe that others reinforce their prior views rather than reading, deliberating, and being influenced by the adjacent media message. Accordingly, social media users exposed to uncivil comments on a mask-promoting post are expected to presume that the post exerts less influence on others’ acceptance of mask-wearing than when exposed to civil comments. The perception of less influence of the post on others, in turn, reduces social media users’ behavioral intention to wear masks. We thus propose the following hypotheses. 
Hypothesis 2a: social media users will have weaker intentions to wear masks when exposed to uncivil comments below a mask-promoting post than when exposed to civil comments.

Hypothesis 2b: the association between comment tone and intentions to wear masks will be mediated by social media users’ perception of the influence of the mask-promoting post on others.

Individuals’ perceptions of a health campaign’s influence on others may be affected by their prior attitudes toward the campaign’s advocacy. This effect can be explained by the “looking-glass perception,” which suggests that people’s social perceptions are often self-centric, and people tend to use their own opinions to estimate those of others. They believe that situational factors are similar between themselves and others. Therefore, they tend to project their prior attitudes onto their perceptions of social consensus on related issues. Previous studies have provided support for the idea that presumed influence may be self-centric. For example, the robust causal chain from self-exposure, presumed exposure, to presumed influence was found to result from the order of questions. When the order of questions (self-variable → other variable → presumed influence on others → behavior) was reversed (other variable → self-variable → presumed influence on the self → behavior), the causal chain conflicted with the influence of presumed influence (IPI) process. The finding suggests that the self may serve as an anchor for projecting presumed influence on others. Another study found that the more individuals relate themselves to the message and consider it real, the greater they perceive the message to elicit an influence on its audiences. Extrapolating from this self-centered perspective, individuals’ prior attitudes toward a health campaign’s advocacy may predict their estimation of the campaign’s influence on others. People tend to accept information that is consistent with their prior beliefs.
When individuals encounter a health message consistent with their prior attitudes, they are more willing to acknowledge that they are influenced by the message and accept its view. In contrast, individuals are more likely to reject the message when they have inconsistent attitudes toward it. Accordingly, individuals with favorable attitudes toward mask-wearing are likely to perceive that others, like themselves, also agree with the mask-promoting message and will be influenced by it. Individuals with unfavorable attitudes toward mask-wearing are likely to believe that others, similar to themselves, reject the message and are immune to it. The perception that the mask-promoting post has affected others, in turn, influences individuals’ behavioral intention to wear masks. In addition to the partial mediating role of presumed influence, the positive association between attitudes and behavioral intentions has been well documented. Attitudes toward a health behavior can inspire individuals’ intention to perform the behavior. Thus, the following 2 hypotheses are proposed:

Hypothesis 3a: social media users will have weaker intentions to wear masks when they have unfavorable attitudes toward mask-wearing than when they have favorable attitudes.

Hypothesis 3b: the association between prior attitudes toward mask-wearing and intentions to wear masks will be partially mediated by social media users’ perception of the influence of the mask-promoting post on others.

The slant and tone of social media comments below a mask-promoting message are likely to interact and affect social media users’ presumption of the message’s influence and subsequently their health compliance. The content of comments provides important cues that help users form impressions of the senders. Previous studies reveal that encountering uncivil comments under a news article led to negative perceptions and less perceived credibility of the commenters.
While source credibility has long been recognized as a key factor in persuasiveness, a lack of credibility among commenters may cause uncivil comments to signal that the message has less influence on others. The reduced presumed influence, in turn, is less likely to drive behavioral change. In other words, social media users’ exposure to civil pro–mask-wearing comments on a mask-promoting post facilitates their perception that the post influences others’ acceptance of mask-wearing and stimulates their compliance with mask-wearing. In contrast, exposure to uncivil pro–mask-wearing comments on a post is likely to decrease this presumed influence of the post and their intentions to comply with mask-wearing. In addition, social media users’ exposure to civil anti–mask-wearing comments on a mask-promoting post can reduce their presumed influence of the post on others’ acceptance of mask-wearing and their compliance with mask-wearing. Conversely, uncivil anti–mask-wearing comments can offset these negative effects to some extent by maintaining the social media users’ presumed influence and behavioral intention of mask-wearing. The following 2 hypotheses are therefore proposed:

Hypothesis 4a: comment tone will moderate the effect of comment slant on social media users’ intentions to wear masks, such that the effect of comment slant on behavioral intention will be stronger when comments are expressed in a civil manner than in an uncivil manner.

Hypothesis 4b: the interaction effect of comment slant and comment tone on social media users’ intentions to wear masks will be mediated by their perception of the influence of the mask-promoting post on others.

It is also likely that there is an interaction among comment slant, comment tone, and prior attitudes. Individuals may have more tolerance for comments that align with their prior attitudes, and they may overlook the incivility and aggressiveness in the comments.
This can be explained by the social identity theory, which posits that individuals categorize themselves and others into in-groups and out-groups based on shared characteristics or beliefs, and this categorization influences their attitudes and behaviors. Individuals may categorize comments into in-group comments (those that align with their prior attitudes) and out-group comments (those that contradict their prior attitudes). They are more likely to favor in-group comments and perceive them in a positive light (ie, as less uncivil) than out-group comments, as these comments reinforce their social identity and validate their prior attitudes. Experimental studies have suggested that individuals would rate a comment that supported their prior attitudes as civil, even though it contained incivility. However, they still recognized the incivility in comments that were against their prior attitudes. Thus, comment tone may only function, or may produce a relatively greater effect, on presumed influence and health-related compliance behavior when social media users’ prior attitudes are inconsistent with comment slant. In contrast, when social media users’ prior attitudes are consistent with comment slant, they may overlook the incivility in these comments and perceive them as civil. Therefore, the impact of comment tone on presumed influence and behavioral intention would be discounted or become nonsignificant. We propose the 2 hypotheses below:

Hypothesis 5a: there is an interaction among comment slant, comment tone, and prior attitudes on social media users’ intentions to wear masks, such that incivility will affect the influence of comments whose slant is inconsistent with social media users’ prior attitudes on their behavioral intention to wear masks, but it will not affect the influence of comments whose slant is consistent with their preexisting attitudes.
Hypothesis 5b: the impact of the interaction of comment slant, comment tone, and prior attitudes on social media users’ intentions to wear masks will be mediated by their perception of the influence of the mask-promoting post on others.

In summary, this study aims to investigate how social media users’ polarized attitudes toward mask-wearing and their exposure to a mask-promoting post accompanied by user comments, independently or collectively, affect their compliance with mask-wearing.

Experimental Design

The study used a web-based between-subjects survey experiment with a 2 (comment slant: pro–mask-wearing vs anti–mask-wearing) × 2 (comment tone: civil vs uncivil) design. Participants were recruited from Amazon Mechanical Turk (MTurk), a crowdsourcing platform that allows individuals to outsource tasks, including web-based experiment participation, to registered workers. There has been a long methodological discussion about the quality of data obtained from MTurk. While some studies criticize the quality of data obtained from this platform, others indicate that MTurk is a feasible platform for online data collection, especially when strict criteria are applied. Therefore, to ensure the quality of our data, we established specific criteria for participant selection (ie, the number of the participants’ approved assignments was >5000, the participants’ approval rating was >95%, and the participants were located in the United States). In addition, we incorporated 2 attention checks (ie, selecting a specific word from the given options). Participation was immediately terminated when participants failed to pass the attention checks.

Ethical Considerations

Participants were recruited from September 29 to October 1, 2020. The study was reviewed and approved by the Human Subjects Ethics Subcommittee of the City University of Hong Kong (2020-55359071) before data collection.
In the recruitment announcement posted on MTurk, we informed participants that (1) this study examined their knowledge of and attitudes toward mask-wearing; (2) participation was fully anonymous, and their self-reported data would be kept confidential; and (3) they could leave the study at any time. After each participant clicked to agree to a written consent form, which again highlighted these ethical considerations, they continued to the survey. Informed consent was obtained from all participants.

Stimuli

A mask-promoting post was created and embedded in a fictitious health organization’s Facebook page, as Facebook is widely used by health organizations to promote health initiatives. The post was created based on the guidelines about mask-wearing posted on the official website of the Centers for Disease Control and Prevention in the United States to ensure external validity. It was developed using the standard format of fear appeal commonly used in health communication campaigns. To prevent the post from being perceived as an unintended threat to individuals’ freedom, which could undermine the campaign’s effectiveness, we framed it as a low-threat fear appeal by using mild and polite language to recommend mask-wearing. The content and layout of the post were kept identical across all conditions. Prior research indicates that exposure to >4 comments does not increase the effect of comment tone. Therefore, we placed 4 comments below the post in each condition. Comment slant was initially created based on actual Facebook users’ expressions about mask-wearing. Across the 2 conditions of comment slant, we matched 2 comments, 1 in each condition, that focused on the same aspect of mask-wearing but expressed opposite opinions, while maintaining similar length, expression style, and argument strength. We repeated this procedure for the other comments.
This allowed us to generate civil pro–mask-wearing and anti–mask-wearing comments without introducing confounding factors. Comment tone was manipulated by following the definition of incivility by Coe et al. We added incivility to the previously created comments to derive uncivil pro–mask-wearing and anti–mask-wearing comments. The post and examples of comments used as stimuli are presented in .

Experimental Procedure

The experiment was conducted using the web-based survey software Qualtrics. Before fielding the questionnaire, the survey, including the stimuli and measures, was proofread by 3 native speakers to ensure readability and validity. The technical functionality of the survey platform and settings was tested by 5 student assistants. The number of items per page and the total number of pages of the questionnaire were adjusted by Qualtrics based on the device each participant used, thereby resulting in variations among participants. All eligible participants could access the survey link posted on the MTurk assignment page. After providing consent for participation, participants were first asked to report their prior attitudes toward mask-wearing, social media use frequency, and mask-wearing practices. The randomizer of Qualtrics enabled us to randomly assign each participant to 1 of the 4 experimental conditions. After being exposed to the stimuli, participants were asked to indicate their responses to the variables of interest, provide demographic information, and answer manipulation check questions. Participants were allowed to review and change their answers using a “back” button at any time before submitting their responses. The question regarding participants’ prior attitudes toward masks served as a screening item. Participants were asked to rate the extent to which wearing a mask in public during the COVID-19 pandemic was favorable or unfavorable on a 7-point scale (1=very unfavorable, 4=neither unfavorable nor favorable, and 7=very favorable).
Participants were categorized as antimaskers (ie, scores <4) and promaskers (ie, scores >4). As this study focused on the effects of polarized attitudes on presumed influence and compliance behavior, participants with neutral attitudes (ie, scores=4) were directed to the end of the survey.

Participants

A total of 1501 participants provided consent and started the survey, with 522 (34.78%) participants completing all the questions and being included in the final analysis. The view rate was 84.01% (1503/1789), the participation rate was 99.53% (1496/1503), and the completion rate was 34.78% (522/1501). Upon completion of the study, each participant received a debriefing and an incentive of US $0.72. As each worker on MTurk has a unique ID, a unique visitor was defined by this ID rather than by cookies. We also checked IP addresses to ensure that each participant was a unique site visitor. The survey, as an MTurk task, was displayed only once to each participant to avoid repeated registrations. The CONSORT-EHEALTH (Consolidated Standards of Reporting Trials of Electronic and Mobile Health Applications and Online Telehealth) form and the Checklist for Reporting Results of Internet e-Surveys (CHERRIES) form are presented in and , respectively, for further clarity. Participants in the final sample were aged 21 to 77 (mean 41.58, SD 12.35) years. More than half of them were men (291/522, 55.7%). Of the 522 participants, 246 (47.1%) had completed college as their highest level of education, and 416 (79.7%) identified themselves as White. Most participants (317/522, 60.8%) reported that their annual family income ranged from US $20,000 to $74,999. In terms of political identification, 41.5% (217/522) of the participants identified themselves as Democrats, followed by 40.2% (210/522) as Republicans and 18.2% (95/522) as neither Republicans nor Democrats.

Measures

The measure of presumed influence was adapted from a previous study.
Participants were asked to indicate the extent to which they agreed that the mask-promoting social media post had made other people support mask-wearing in public during the COVID-19 pandemic, using a 7-point scale (1=strongly disagree and 7=strongly agree; mean 4.43, SD 2.00). They were also asked to evaluate whether the post had negatively or positively affected others’ attitudes toward mask-wearing, using a 7-point scale (1=in a very negative manner and 7=in a very positive manner; mean 4.20, SD 1.93). These 2 items were highly correlated and were averaged to form the measure of presumed influence (r=0.77; P<.001; mean 4.32, SD 1.85). We measured participants’ behavioral intention to wear masks as compliance with health campaigns by adapting the measure used by Dillard and Shen. Participants were asked to estimate the likelihood that they would wear a mask in public in the next week using a 7-point scale, ranging from 1=definitely will not to 7=definitely will (mean 5.61, SD 1.90). Before being exposed to the experimental stimuli, participants were asked to report their social media use frequency and mask-wearing practices in the last week. Responses to the 2 questions were rated on a 5-point scale, where 1 meant never and 5 meant nearly always (social media use frequency: mean 3.79, SD 0.93; mask-wearing practices: mean 3.29, SD 1.04).

Preliminary Statistical Analyses

For randomization checks, a series of 1-way ANOVAs were conducted to test the differences in continuous variables, and several chi-square analyses were conducted to test the differences in categorical variables across conditions. In addition, participants were categorized into 2 groups (ie, antimaskers and promaskers) based on the screening question. The 1-way ANOVAs and chi-square analyses were repeated to test the differences in demographic variables between antimaskers and promaskers.
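As a rough sketch (not the authors' analysis code), the composite scoring described under Measures, checking that the two 7-point presumed-influence items correlate and then averaging them per participant, can be reproduced as follows. The item scores below are invented for illustration; the study reports r=0.77 for its actual items.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two lists of item scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def composite(item1, item2):
    """Average the two items into one presumed-influence score per participant."""
    return [(a + b) / 2 for a, b in zip(item1, item2)]

# Invented 7-point ratings (hypothetical, 8 participants):
item1 = [7, 5, 2, 6, 4, 1, 7, 3]   # "made others support mask-wearing"
item2 = [6, 5, 3, 7, 4, 2, 6, 2]   # "affected others' attitudes positively"

r = pearson_r(item1, item2)
presumed_influence = composite(item1, item2)
```

Averaging is defensible only because the items are strongly correlated; with a weak inter-item correlation, the two items would be analyzed separately rather than combined.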
Statistical Analyses for Manipulation Checks

After exposure to the experimental materials, participants were asked to report whether they had read the comments below the post. Next, participants were asked to indicate the extent to which they thought the comments were favorable to the post using a 7-point scale (1=very unfavorable and 7=very favorable). We used an independent-samples 2-tailed t test to check the difference in perceived comment slant between participants in the pro–mask-wearing comments condition and those in the anti–mask-wearing comments condition. Then, 1-sample t tests were conducted to indicate whether participants’ perceived comment slant significantly deviated from the midpoint of the scale (ie, 4). Furthermore, participants were asked to rate the degree of comment incivility using a 7-point scale (1=very uncivil and 7=very civil). We used independent-samples t tests to check the difference in perceived comment civility between participants in the civil comments condition and those in the uncivil comments condition, and 1-sample t tests to indicate whether perceived civility significantly deviated from the midpoint of the scale. For all these tests, descriptive information (ie, mean and SD), t values, dfs, and P values were reported to illustrate the differences.

Statistical Analyses for Hypotheses Testing

To test the proposed hypotheses concurrently, we used the PROCESS macro (model 12). The PROCESS macro is a regression path analysis modeling tool used to conduct mediation, moderation, and conditional process analysis; it is widely applied in the fields of social, business, and health sciences. Its model 12 tests moderated mediation models.
In this study, behavioral intention to wear masks was included as the dependent variable. Prior attitude (0=anti–mask-wearing and 1=pro–mask-wearing) was entered as the independent variable, and comment slant (0=anti–mask-wearing and 1=pro–mask-wearing) and comment tone (0=uncivil and 1=civil) were included as moderators. Participants’ demographics (ie, age, gender, education, income, race, and political identification), mask-wearing frequency, and social media use frequency were included as covariates. Missing values were replaced by mean scores. We reported the unstandardized coefficient (B), unstandardized SE, P value, and 95% CI, which indicate the effects of participants’ prior attitudes, comment slant, comment tone, and presumed influence on their intention to wear masks. In addition, the effect size, SE, and 95% CI were reported to show the conditional direct and indirect effects of comment slant, comment tone, and prior attitudes on behavioral intention.

Statistical Analyses for Sensitivity Analysis

Two sensitivity analyses were conducted. First, we calculated attitude extremity by subtracting 4 from the value chosen by promaskers in the screening question and by subtracting the value chosen by antimaskers from 4 (ie, 1=low extremity, 2=medium extremity, and 3=high extremity). We controlled for this variable in sensitivity analysis 1. Second, we added the variables stepwise to the regression models (main effects first, then the interaction terms) to better demonstrate the main effects in sensitivity analysis 2.
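Conceptually, the moderated mediation tested by PROCESS model 12 amounts to two regressions plus a bootstrap: a mediator model containing the three-way interaction of the independent variable and the two moderators, an outcome model that adds the mediator, and percentile-bootstrap CIs for the conditional indirect effects. The sketch below illustrates this logic on simulated data (it is not the study's code, omits the covariates, and all variable names and effect sizes are invented).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated binary predictors mirroring the coding described above
attitude = rng.integers(0, 2, n).astype(float)  # X: 0=anti, 1=pro mask-wearing
slant = rng.integers(0, 2, n).astype(float)     # W: 0=anti, 1=pro comments
tone = rng.integers(0, 2, n).astype(float)      # Z: 0=uncivil, 1=civil
presumed = 1.5 * slant + 0.8 * attitude + rng.normal(0, 1, n)      # mediator M
intention = 0.4 * presumed + 0.6 * attitude + rng.normal(0, 1, n)  # outcome Y

def design(X, W, Z):
    # Intercept, main effects, two-way interactions, three-way interaction
    return np.column_stack(
        [np.ones(len(X)), X, W, Z, X * W, X * Z, W * Z, X * W * Z])

def fit(y, D):
    # Ordinary least squares via lstsq
    return np.linalg.lstsq(D, y, rcond=None)[0]

def conditional_indirect(X, W, Z, M, Y, w, z):
    a = fit(M, design(X, W, Z))                            # mediator model
    b = fit(Y, np.column_stack([design(X, W, Z), M]))[-1]  # slope of M in outcome model
    # a-path of X at moderator values (w, z): a1 + a4*w + a5*z + a7*w*z
    return (a[1] + a[4] * w + a[5] * z + a[7] * w * z) * b

def bootstrap_ci(X, W, Z, M, Y, w, z, reps=500):
    est = []
    for _ in range(reps):
        i = rng.integers(0, len(X), len(X))  # resample cases with replacement
        est.append(conditional_indirect(X[i], W[i], Z[i], M[i], Y[i], w, z))
    return np.percentile(est, [2.5, 97.5])

# Conditional indirect effect of attitude among promaskers reading civil comments
effect = conditional_indirect(attitude, slant, tone, presumed, intention, w=1, z=1)
low, high = bootstrap_ci(attitude, slant, tone, presumed, intention, w=1, z=1)
```

An indirect effect is deemed significant when its bootstrap CI excludes zero, which is why the paper reports effect sizes with 95% CIs rather than P values for these paths.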
First, we calculated attitude extremity by subtracting 4 from the value chosen by promaskers in the screening question and by subtracting the value chosen by antimaskers from 4 (ie, 1=low extremity, 2=medium extremity, and 3=high extremity). We controlled for this variable in sensitivity analysis 1. Second, we added the variables to the regression models stepwise (main effects first and then the interaction terms) to better demonstrate the main effects in sensitivity analysis 2.

Preliminary Analyses

A CONSORT (Consolidated Standards of Reporting Trials) flow diagram for the participants is presented in . The demographic information across groups is presented in . A series of 1-way ANOVAs indicated that there were no significant differences in participants’ age ( P =.77), education ( P =.37), and annual income ( P =.54) across conditions. Chi-square analyses also showed no significant differences in participants’ gender ( P =.42), race ( P =.97), and political identification ( P =.21) across conditions. In addition, among 522 participants, 269 (51.5%) had unfavorable attitudes toward mask-wearing (ie, antimaskers), whereas 253 (48.5%) held favorable attitudes toward mask-wearing (ie, promaskers). No significant differences in age ( P =.91), gender ( P =.91), and annual income ( P =.51) were found between antimaskers and promaskers. However, promaskers (mean 5.62, SD 1.04) reported higher levels of education than antimaskers (mean 5.32, SD 1.19; t 517.16 =3.05; P =.002). Therefore, basic demographic factors were controlled in later analysis to adjust for the differences in the sample.

Manipulation Checks

Those who reported not reading comments were excluded (11/539, 2%). Participants in the pro–mask-wearing comments condition considered the comments to be more favorable to the post (mean 5.50, SD 1.85) than those in the anti–mask-wearing comments condition (mean 1.70, SD 1.51; t 499.56 =25.75; P <.001).
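The manipulation checks pair a Welch (unequal-variances) independent-samples t test between conditions with one-sample t tests against the scale midpoint of 4. A minimal SciPy sketch, using illustrative rating arrays rather than the study data:

```python
# Sketch of the manipulation checks: a Welch independent-samples t test
# between conditions plus one-sample t tests against the midpoint (4).
# `pro` and `anti` are hypothetical arrays of 7-point ratings.
from scipy import stats

def manipulation_check(pro, anti, midpoint=4.0):
    # Welch t test: does perceived slant/civility differ between conditions?
    between = stats.ttest_ind(pro, anti, equal_var=False)
    # One-sample t tests: does each condition deviate from the midpoint?
    vs_mid = {"pro": stats.ttest_1samp(pro, midpoint),
              "anti": stats.ttest_1samp(anti, midpoint)}
    return between, vs_mid
```

The fractional degrees of freedom reported in the text (eg, t499.56) are characteristic of the Welch correction, which does not assume equal variances across groups.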
One-sample t tests indicated that participants exposed to pro–mask-wearing comments (t 260 =13.10; P <.001) and those exposed to anti–mask-wearing comments (t 260 =24.68; P <.001) both perceived the comments to deviate significantly from the midpoint of the scale (ie, 4). Next, participants in the civil comments condition considered the comments more civil (mean 4.28, SD 1.88) than those in the uncivil condition (mean 1.92, SD 1.56; t 500.29 =15.53; P <.001). One-sample t tests showed that participants exposed to civil comments (t 258 =2.38; P =.02) and those exposed to uncivil comments (t 262 =21.53; P <.001) both perceived the comment tone to deviate significantly from the midpoint of 4.

Hypotheses Testing

The results of hypotheses testing for the separate and combined effects of comment slant, comment tone, and prior attitudes on presumed influence and mask-wearing intention are reported in . The results of bootstrapping for the conditional direct and indirect effects of comment slant, comment tone, and prior attitudes on behavioral intention to wear masks are summarized in . As for hypotheses 1a and 1b, the regression results in showed that there was no significant association between comment slant and behavioral intention (B=–0.06; P =.74). Hence, hypothesis 1a was not supported. Nevertheless, we found that compared with anti–mask-wearing comments, pro–mask-wearing comments increased presumed influence (B=1.49; P <.001), and this presumed influence was positively associated with participants’ behavioral intention to wear masks (B=0.07; P =.03). The bootstrapping results showed that the direct effect of comment slant on behavioral intention was significant only among antimaskers who read civil comments (B=0.73, SE 0.79; 95% CI 0.35-1.10). Comment slant posed an indirect influence on behavioral intention through the mediation of presumed influence, regardless of participants’ prior attitudes or comment tone . Hence, hypothesis 1b was supported by the data.
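The moderated mediation analysis and the bootstrapped conditional indirect effects reported above can be roughly approximated in Python. The variable names below are illustrative assumptions (X = prior attitude, W = comment slant, Z = comment tone, M = presumed influence, Y = behavioral intention), covariates are omitted for brevity, and this is a sketch of the general technique, not the PROCESS macro itself:

```python
# Rough analog of a PROCESS model 12-style moderated mediation:
# mediator model M ~ X*W*Z, outcome model Y ~ M + X*W*Z, with a
# percentile bootstrap for the conditional indirect effect a(w, z) * b.
import numpy as np
import statsmodels.formula.api as smf

def conditional_indirect_effect(df, w, z, n_boot=1000, seed=0):
    """Point estimate and percentile 95% CI of the indirect effect of X
    on Y through M at moderator values (w, z)."""
    def estimate(d):
        m_model = smf.ols("M ~ X * W * Z", data=d).fit()      # mediator model
        y_model = smf.ols("Y ~ M + X * W * Z", data=d).fit()  # outcome model
        a = (m_model.params["X"] + m_model.params["X:W"] * w
             + m_model.params["X:Z"] * z
             + m_model.params["X:W:Z"] * w * z)               # conditional a-path
        return a * y_model.params["M"]                        # a * b
    rng = np.random.default_rng(seed)
    point = estimate(df)
    boots = [estimate(df.iloc[rng.integers(0, len(df), len(df))])
             for _ in range(n_boot)]
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return point, (lo, hi)
```

As in the analyses reported here, an indirect effect is judged significant when its bootstrapped 95% CI excludes zero.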
Next, for hypotheses 2a and 2b, results in showed that there was a significant but negative association between comment tone and behavioral intention to wear masks (B=–0.44; P =.02). Hence, hypothesis 2a was not supported. As for hypothesis 2b, the effect of comment tone on presumed influence was positively significant (B=0.63; P =.02), and the association between presumed influence and intention to wear masks was also positively significant (B=0.07; P =.03). The bootstrapping results showed that comment tone posed a direct influence on behavioral intention to wear masks only when antimaskers encountered anti–mask-wearing comments (B=–0.44, SE 0.19; 95% CI –0.81 to –0.07). In addition, comment tone posed an influence on behavioral intention via the mediating effects of presumed influence when the comments were pro–mask-wearing, regardless of participants’ prior attitudes . Hence, hypothesis 2b was partially supported. As for hypotheses 3a and 3b, the results in showed that the direct effect of prior attitudes on behavioral intention to wear masks was significant (B=0.86; P <.001). Hence, hypothesis 3a received support. However, the effect of prior attitudes on presumed influence was not significant (B=0.19; P =.51). The bootstrapping results indicated that as long as the comments were uncivil or anti–mask-wearing, participants’ prior attitudes were directly associated with their behavioral intention. Only when comments were pro–mask-wearing and civil did prior attitudes affect behavioral intention through presumed influence. Hence, we mostly could not corroborate hypothesis 3b. Regarding hypothesis 4a, shows that the interaction had a significant and direct effect on behavioral intention (B=0.79; P =.003).
As shown in , when expressed in a civil way, opposing comments (mean 5.46, SD 0.10) reduced mask-wearing intention compared with supporting comments (mean 5.70, SD 0.10); when expressed in an uncivil way, the effect of comment slant was reversed such that supporting comments (mean 5.59, SD 0.10) reduced mask-wearing intention compared with opposing comments (mean 5.73, SD 0.10). Hence, hypothesis 4a was supported. For hypothesis 4b, the results showed that the interaction between comment slant and comment tone did not significantly predict presumed influence (B=0.09; P =.80). Hence, hypothesis 4b was not supported. For hypotheses 5a and 5b, the results showed that the interaction was significant for behavioral intention (B=–0.84; P =.03) but not for presumed influence (B=0.55; P =.30). However, the effect of the interaction on behavioral intention was different from what we expected. As shown in , for promaskers, behavioral intention to wear masks remained similar whether they saw civil or uncivil comments, regardless of whether the comments were anti–mask-wearing (civil: mean 6.06, SD 0.15 and uncivil: mean 6.16, SD 0.14; P =.61) or pro–mask-wearing (civil: mean 5.80, SD 0.15 and uncivil: mean 5.95, SD 0.14; P =.45). In contrast, among antimaskers, behavioral intention remained similar when they viewed uncivil pro–mask-wearing comments (mean 5.24, SD 0.13) and civil pro–mask-wearing comments (mean 5.59, SD 0.14; P =.06). Nevertheless, their behavioral intention was stronger when they read uncivil anti–mask-wearing comments (mean 5.31, SD 0.15) compared with civil anti–mask-wearing comments (mean 4.87, SD 0.14; P =.02). Hence, hypothesis 5a was partially supported, but hypothesis 5b was not supported.

Sensitivity Analyses

Results from the sensitivity analyses, which included attitude extremity as an additional control variable and applied stepwise multiple linear regression ( -8), were consistent with the main results.
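The attitude-extremity recoding used in sensitivity analysis 1 reduces to the distance of the screening response from the scale midpoint. A small illustrative helper, assuming a 7-point screening item (1-3 antimaskers, 5-7 promaskers):

```python
# Hypothetical recoding of attitude extremity from a 7-point screening
# item. Subtracting 4 from promaskers' values (5-7) and subtracting
# antimaskers' values (1-3) from 4 both reduce to an absolute difference.
def attitude_extremity(screening_value: int) -> int:
    """Return 1 (low), 2 (medium), or 3 (high) extremity."""
    return abs(screening_value - 4)
```

For example, a promasker who chose 7 and an antimasker who chose 1 both receive the highest extremity score of 3.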
Principal Findings

This study investigated how polarized and hostile user comments below a health campaign message on social media and social media users’ polarized attitudes concurrently affected their perception of the campaign’s influence on others and their compliance with the promoted health behavior. Results showed that compared with anti–mask-wearing comments, pro–mask-wearing comments enhanced presumed influence and health compliance of mask-wearing, but incivility in the comments hindered the positive impact of pro–mask-wearing comments.
Antimaskers demonstrated increased compliance when they were unable to find civil support for their opinion in the social media environment. The summary of the research hypotheses and corresponding results is presented in .

Results and Comparison With Prior Work

First, comment slant remained a cornerstone driving individuals’ presumed influence of the mask-promoting post and their compliance with mask-wearing. Compared to pro–mask-wearing comments, anti–mask-wearing comments always reduced participants’ presumed influence of mask-wearing posts, which further weakened their behavioral intention to wear masks, regardless of comment tone and their prior attitudes toward mask-wearing. These findings suggest that comments can serve as a source of misleading information. Although attitudinal consensus is inferred from comments left by a limited number of anonymous others, these comments may lead social media users to develop inaccurate beliefs that the comments reflect public opinion from people in general. These beliefs may influence their health-related compliance behaviors. In addition, incivility affected the presumed influence of a health message, but only when the comments below the message expressed supportive opinions. It is likely that pro–mask-wearing comments below the health message signal the presumed influence of the message on others’ acceptance, and incivility acts as a negative cue that hinders the exemplification effect and indicates that the highly homogeneous and consistent opinion environment depicted in the comments may not be accurate. In contrast, anti–mask-wearing comments explicitly represent others’ resistance to the main health message, and the presence of incivility only adds a similar cue of others’ resistance. These effects of comment slant and comment tone advance the other-consciousness perspective of the IPI process in the context of digital health campaigns.
Individuals’ perception of others’ reception of a media message is influenced by affordances offered by social media. Even when information on source credibility or audience size is absent, the presumed influence of social media messages still adjusts individuals’ compliance behavior accordingly. Social media users take the roles of both content producers and commenters. The opinion environment is highly prone to produce and spread misleading information due to the lack of professional gatekeepers and a polarized opinion climate. The IPI model demonstrates a psychological process through which individuals’ exposure to health information and relevant discussions on social media affects their compliance with promoted health behaviors. Therefore, it is crucial to consider both the direct and indirect effects of social media comments below health-persuasive messages on public health outcomes when examining the persuasiveness of digital health communication. With concrete clues about others’ reactions to health persuasion obtained from comments, individuals no longer rely solely on their prior attitudes to infer the influence of a health campaign on others. These findings somewhat challenge the self-centric perspective of IPI. This change can be explained by the evolving media landscape. Previous studies support the self-centric perspective of IPI in the context of traditional media, where traditional media audiences have limited access to others’ reactions to a message and are compelled to rely on their prior attitudes for inference. On social media, users can directly see others’ reactions to a message. They no longer need to fully rely on personal attitudes to infer media influence on others. Comments serve as crucial sources for them to infer the influence of social media posts on others. Only when approving and civil comments are present can prior attitudes affect behavioral intentions through individuals’ presumed media influence on others.
One possible explanation is that individuals in general are subject to negativity bias, that is, they are particularly susceptible to information that contains negativity or risks. Anti–mask-wearing comments or incivility left a stronger impression on participants and influenced them because these comments might exaggerate the negative side of mask-wearing and demonstrate hostility among commenters. Therefore, individuals’ perception of others’ reactions to the main message is believed to be influenced by negative cues rather than prior attitudes. Civil pro–mask-wearing comments suggested no cues of negativity, and individuals then relied on their prior attitudes to infer the perception of the post’s influence on others. In most cases, favorable prior attitudes toward mask-wearing directly enhance individuals’ behavioral intention compared to unfavorable prior attitudes. Nevertheless, the influence of prior attitudes on behavioral intentions can be altered by the social media comments that follow digital health communication. Specifically, civil pro–mask-wearing comments directly enhance antimaskers’ behavioral intention to wear masks more than uncivil pro–mask-wearing comments, whereas uncivil anti–mask-wearing comments turn out to enhance antimaskers’ behavioral intention to wear masks more than civil anti–mask-wearing comments. An explanation is that individuals may psychologically dissociate themselves from a group that is seen as relatively inferior ; incivility is seen as impolite and undesirable, and individuals may avoid belonging to a group whose members are rude and uncivil. The findings indicate that individuals engage in biased information processing only when they find civil support for their prior opinions, regardless of whether the support is narrated in the main message or in the comments.
Relatedly, while we suspected that antimask attitudes differing from the post advocacy would be associated with less presumed influence, there is a possibility that opponents of mask-wearing may adhere to conspiracy theories. Such individuals might suspect that everyone around them has been brainwashed by governmental health campaigns, thereby leading to very high presumed influence. In other words, there might be a curvilinear relationship between prior attitudes and presumed influence or a linear relationship between attitude strength and presumed influence among antimaskers. Therefore, we conducted additional tests and found that these possibilities were not supported by our data. These findings suggest that in the era of new media, where user responses to health campaigns are publicly visible, judgments about the presumed influence of a post rely more on these visible examples than on personal prior attitudes.

Limitations and Future Directions

This study has several limitations that should be acknowledged. First, we edited the comments to maintain consistent argument strength across conditions, and therefore, the level of perceived authenticity in the comments may differ. Furthermore, we used default Facebook avatars in the experimental stimuli. Uncivil social media comments coupled with default avatars may be regarded as bot accounts, given the heavily politicized discussion on mask-wearing in the United States. The perception of commenters as bots may affect the presumed influence accordingly. These two aspects suggest that the perceived unrealism of the stimuli, particularly the user comments created in this study, may reduce the validity of the findings. Given that, future studies would benefit from measuring the perceived realism of comments and controlling it as a covariate in the analyses. Second, this study focuses on the effects of user comments and prior attitudes, leaving the main effectiveness of the health campaign post unexamined.
Likewise, the interaction effects between the post and its accompanying comments on polarized publics’ presumed influence and behavioral intentions remain unexplored. The combined effects of comment slant and comment tone may vary depending on the post presented together with the comments. The lack of examination of the interplay between comments and the post may hinder a nuanced understanding of the combined effects of digital information. Future studies are encouraged to consider the effectiveness of a post and its interaction with comments. Third, participants were required to read the post and accompanying comments, which may not reflect real-life scenarios where individuals can choose whether or not to browse the information. The procedure of providing informed consent and reading the survey questions may also have biased participants’ later answers. These factors could also affect the validity of the study findings. Future research should use experimental designs that better reflect real-world settings. Fourth, although the IPI model has long been used in health communication research and is valuable for addressing specific questions in this study, it primarily focuses on the indirect effects of health campaigns. However, within the context of public health communication, there are various alternative theoretical explanations for the effectiveness or ineffectiveness of health campaigns. For instance, fear appeals suggest that how information is presented by the supply side of communication can influence individuals’ emotional reactions and health behavior changes . Psychological reactance can be another relevant concept with regard to campaign failure from the recipients’ perspective. When individuals perceive health campaigns to threaten their behavioral freedom, they react in ways contrary to the campaign’s intent, resulting in communication failure .
In other words, the findings from this study should be interpreted as 1 aspect of evaluating the effectiveness of health campaigns. To gain a comprehensive understanding of their effectiveness, these findings should be integrated with insights from other theoretical perspectives.

Conclusions and Implications

Despite these limitations, our study suggests that online health campaigns may yield desirable outcomes when civil and supportive comments are present. Moreover, social media users often engage in biased processing of health persuasion and rely heavily on their prior attitudes to guide their subsequent compliance behaviors. Unfavorable prior attitudes toward health behaviors can harm the effects of digital health communication only when individuals find civil and consistent evidence supporting their unfavorable opinions. Therefore, it is beneficial to encourage social media users to leave civil and supportive comments on digital health campaigns. In addition, misinformation and incivility in online comment sections should be moderated by relevant media platforms. Moreover, relevant information literacy programs should be delivered to the public to prevent them from being misled by biased user comments. Theoretically, this study explores the other-consciousness and self-centered perspectives of presumed influence in the context of social media health campaigns, where messages are presented together with extensive polarized and hostile user comments. People rely on online commentary and their prior attitudes to infer the presumed influence of health campaigns.
Results showed that compared with anti–mask-wearing comments, pro–mask-wearing comments enhanced presumed influence and health compliance of mask-wearing, but incivility in the comments hindered the positive impact of pro–mask-wearing comments. Antimaskers demonstrated increased compliance when they were unable to find civil support for their opinion in the social media environment. The summary of the research hypotheses and corresponding results are presented in . First, comment slant remained a cornerstone driving individuals’ presumed influence of the mask-promoting post and their compliance with mask-wearing. Compared to pro–mask-wearing comments, anti–mask-wearing comments always reduced participants’ presumed influence of mask-wearing posts, which further weakened their behavioral intention to wear masks, regardless of comment tone and their prior attitudes toward mask-wearing. These findings suggest that comments can serve as a source of misleading information. Although attitudinal consensus is inferred from the comments left by anonymous and limited others, these comments may lead social media users to develop inaccurate beliefs that the comments reflect public opinion from people in general. These beliefs may influence their health-related compliance behaviors. In addition, incivility affected the presumed influence of a health message, but only when the comments below the message expressed supportive opinions. It is likely that pro–mask-wearing comments below the health message signal the presumed influence of the message on others’ acceptance, and incivility acts as a negative cue that hinders the exemplification effect and indicates that the highly homogeneous and consistent opinion environment depicted in the comments may not be accurate. In contrast, anti–mask-wearing comments have explicitly represented others’ resistance to the main health message, and the presence of incivility only signals a similar cue of others’ resistance. 
These effects of comment slant and comment tone advance the other-consciousness perspective of the IPI process in the context of digital health campaigns. Individuals’ perception of others’ reception of a media message is influenced by affordances offered by social media. Even when information on source credibility or audience size is absent, the presumed influence of social media messages still adjusts individuals’ compliance behavior accordingly. Social media users take the roles of both content producers and commenters. The opinion environment is highly prone to produce and spread misleading information due to the lack of professional gatekeepers and polarized opinion climate. The IPI model demonstrates a psychological process through which individuals’ exposure to health information and relevant discussions on social media affects their compliance with promoted health behaviors. Therefore, it is crucial to consider both the direct and indirect effects of social media comments below health-persuasive messages on public health outcomes when examining the persuasiveness of digital health communication. With concrete clues about others’ reactions to health persuasion obtained from comments, individuals no longer rely solely on their prior attitudes to infer the influence of a health campaign on others. These findings somewhat challenge the self-centric perspective of IPI. This change can be explained by the evolving media landscape. Previous studies support the self-centric perspective of IPI in the context of traditional media, where traditional media audiences have limited access to others’ reactions to a message and are compelled to rely on their prior attitudes for inference. In social media, users can directly see others’ reactions to a message. They no longer need to fully rely on personal attitudes to infer media influence on others. Comments serve as crucial sources for them to infer the influence of social media posts on others. 
Only when approving and civil comments are present, prior attitudes can affect behavioral intentions through individuals’ presumed media influence on others. One possible explanation is that individuals in general are subject to negative bias, that is, they are particularly susceptible to information that contains negativity or risks. Anti–mask-wearing comments or incivility impressed and influenced participants because these comments might exaggerate the negative side of mask-wearing and demonstrate hostility among commenters. Therefore, individuals’ perception of others’ reactions to the main message is believed to be influenced by negative cues rather than prior attitudes. Civil pro–mask-wearing comments suggested no cues of negativity, and individuals then relied on their prior attitudes to infer the perception of the post’s influence on others. In most cases, favorable prior attitudes toward mask-wearing directly enhance individuals’ behavioral intention compared to unfavorable prior attitudes. Nevertheless, the influence of prior attitudes on behavioral intentions can be altered by social media comments ensuing digital health communication. Specifically, civil pro–mask-wearing comments directly enhance antimaskers’ behavioral intention to wear masks more than uncivil pro–mask-wearing comments, whereas uncivil anti–mask-wearing comments turn out to enhance antimaskers’ behavioral intention to wear masks more than civil anti–mask-wearing comments. An explanation is that individuals may psychologically dissociate themselves from a group whose members belong to a relatively inferior group ; incivility is seen as impolite and undesirable, and individuals may avoid belonging to a group whose members are rude and uncivil. The findings indicate that individuals engage in biased information processing only when they find civil support for their prior opinions, regardless of whether the support is narrated in the main message or in the comments. 
Relatedly, although we suspected that antimask attitudes differing from the post's advocacy would be associated with less presumed influence, opponents of mask-wearing may also adhere to conspiracy theories. Such individuals might suspect that everyone around them has been brainwashed by governmental health campaigns, leading to very high presumed influence. In other words, there might be a curvilinear relationship between prior attitudes and presumed influence, or a linear relationship between attitude strength and presumed influence among antimaskers. We therefore conducted additional tests and found that neither possibility was supported by our data. These findings suggest that in the era of new media, where user responses to health campaigns are publicly visible, judgments about the presumed influence of a post rely more on these visible examples than on personal prior attitudes. This study has several limitations that should be acknowledged. First, we edited the comments to maintain consistent argument strength across conditions, so the level of perceived authenticity of the comments may differ. Furthermore, we used default Facebook avatars in the experimental stimuli. Uncivil social media comments coupled with default avatars may be regarded as bot accounts, given the heavily politicized discussion of mask-wearing in the United States, and the perception of commenters as bots may affect presumed influence accordingly. These two aspects suggest that the perceived unrealism of the stimuli, particularly the user comments created for this study, may reduce the validity of the findings. Future studies would therefore benefit from measuring the perceived realism of comments and controlling for it as a covariate in the analyses. Second, this study focuses on the effects of user comments and prior attitudes, leaving the main effectiveness of the health campaign post unexamined.
Likewise, the interaction effects between the post and its accompanying comments on polarized publics' presumed influence and behavioral intentions remain unexplored. The combined effects of comment slant and comment tone may vary depending on the post presented alongside the comments, and the lack of examination of this interplay may hinder a nuanced understanding of the combined effects of digital information. Future studies are encouraged to consider the effectiveness of a post and its interaction with comments. Third, participants were required to read the post and its accompanying comments, which may not reflect real-life scenarios in which individuals choose whether to browse the information. The procedure of providing informed consent and reading the survey questions may also have biased participants' later answers. These factors could likewise affect the validity of the study findings, and future research should use experimental designs that better reflect real-world settings. Fourth, although the IPI model has long been used in health communication research and is valuable for addressing the specific questions in this study, it focuses primarily on the indirect effects of health campaigns. Within the context of public health communication, however, there are various alternative theoretical explanations for the effectiveness or ineffectiveness of health campaigns. For instance, research on fear appeals suggests that how information is presented by the supply side of communication can influence individuals' emotional reactions and health behavior changes. Psychological reactance is another relevant concept for explaining campaign failure from the recipients' perspective: when individuals perceive health campaigns as threatening their behavioral freedom, they react in ways contrary to the campaign's intent, resulting in communication failure.
In other words, the findings from this study should be interpreted as 1 aspect of evaluating the effectiveness of health campaigns; to gain a comprehensive understanding of that effectiveness, they should be integrated with insights from other theoretical perspectives. Despite these limitations, our study suggests that online health campaigns may yield desirable outcomes when civil and supportive comments are present. Moreover, social media users often engage in biased processing of health persuasion and rely heavily on their prior attitudes to guide their subsequent compliance behaviors. Unfavorable prior attitudes toward health behaviors harm the effects of digital health communication only when individuals find civil and consistent evidence supporting their unfavorable opinions. It is therefore beneficial to encourage social media users to leave civil and supportive comments on digital health campaigns. In addition, misinformation and incivility in online comment sections should be moderated by the relevant media platforms, and information literacy programs should be delivered to the public to prevent people from being misled by biased user comments. Theoretically, this study explores the other-consciousness and self-centered perspectives of presumed influence in the context of social media health campaigns, where messages are presented together with extensive polarized and hostile user comments. People rely on both online commentary and their prior attitudes to infer the presumed influence of health campaigns.